Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-mm', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-MM
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
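As a minimal sketch of the call above (the name and email are hypothetical placeholders, not the actual document authors; DOC.set_contributor in the next cell takes the same two arguments):
# Illustrative placeholder values only - replace with the real author details
DOC.set_author("Jane Doe", "jane.doe@example.org")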
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
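When the document is complete, the same call is issued with 1 to mark it for publication (a sketch following the 0/1 convention stated in the cell above):
# Illustrative only - set to 1 once the document is ready to publish
DOC.set_publication_status(1)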
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
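As a sketch of the set_id/set_value pattern used for every property in this notebook (the overview text is a hypothetical placeholder, not the documented HadGEM3-GC31-MM description):
# Illustrative sketch only - placeholder text, not the documented model overview
DOC.set_id('cmip6.land.key_properties.model_overview')
DOC.set_value("Land surface model representing soil, snow, vegetation, carbon and river routing, coupled to the atmosphere.")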
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
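For multi-valued ENUM properties (Cardinality 0.N or 1.N) the PROPERTY VALUE(S) wording above suggests DOC.set_value is called once per selected choice; a sketch with two of the listed choices (illustrative only, not the documented model configuration):
# Illustrative only - one call per selected choice, taken from the valid list above
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
DOC.set_value("water")
DOC.set_value("energy")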
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
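BOOLEAN properties take an unquoted True or False, as the valid choices above indicate; a sketch with an illustrative value (not the documented configuration):
# Illustrative only - True/False per the valid choices above
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
DOC.set_value(True)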
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
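INTEGER properties are set the same way with an unquoted number; a sketch with a hypothetical value (not the documented HadGEM3-GC31-MM time step):
# Illustrative only - hypothetical 1800 second land surface time step
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
DOC.set_value(1800)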
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
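Single-valued ENUM properties (Cardinality 1.1) take exactly one of the listed choices; a sketch picking one option from the list above (illustrative only, not the documented hydrology scheme):
# Illustrative only - a single choice from the valid list above
DOC.set_id('cmip6.land.soil.hydrology.method')
DOC.set_value("Explicit diffusion")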
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
8,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Univariate plotting with pandas (single-variable plots)
df.plot.bar() df.plot.line() df.plot.area() df.plot.hist()
Step1: About one third of the wines come from California
得分['points']分数越高越好 | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
# `reviews` is assumed to be the wine-reviews DataFrame loaded in an earlier cell,
# e.g. (illustrative path): reviews = pd.read_csv('winemag-data_first150k.csv', index_col=0)
reviews['province'].value_counts().head(10).plot.bar()
plt.show()
(reviews['province'].value_counts().head(10) / len(reviews)).plot.bar()
plt.show()
Explanation: Univariate plotting with pandas (single-variable plots)
df.plot.bar() df.plot.line() df.plot.area() df.plot.hist()
End of explanation
reviews['points'].value_counts().sort_index().plot.bar()  # bar chart
plt.show()
reviews['points'].value_counts().sort_index().plot.line()
plt.show()
reviews['points'].value_counts().sort_index().plot.area()
plt.show()
reviews['points'].plot.hist()  # histogram
plt.show()
reviews[reviews['price'] < 200]['price'].plot.hist()
plt.show()
reviews['price'].plot.hist()
plt.show()
reviews[reviews['price'] > 1500]
Explanation: About one third of the wines come from California.
The 'points' score: the higher the score, the better.
End of explanation |
8,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {w: i for i, w in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
punctuation_dict = {
'.' : '||Period||',
',' : '||Comma||',
'"' : '||Quotation_mark||',
';' : '||Semicolon||',
'!' : '||Exclamation_mark||',
'?' : '||Question_mark||',
'(' : '||Left_parentheses||',
')' : '||Right_parentheses||',
'--' : '||Dash||',
'\n' : '||Return||'
}
return punctuation_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
    # Shapes are left as [None, None] so the same graph can be fed different
    # batch sizes and sequence lengths during training and text generation
    inputs = tf.placeholder(tf.int32, shape=[None, None], name='input')
    targets = tf.placeholder(tf.int32, shape=[None, None], name='targets')
    # The learning rate must be a float placeholder for the optimizer to use it
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
    num_layers = 2  # assumption: number of stacked LSTM layers (not specified in the text)
    def build_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)  # the cell size comes from rnn_size
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.8)
    cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)
    initial_state = tf.identity(initial_state, name='initial_state')
    return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed_dim = 200
embedded = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embedded)
weights = tf.Variable(tf.truncated_normal(shape=(rnn_size,vocab_size),mean=0.0,stddev=0.1))
biases = tf.Variable(tf.zeros(shape=[vocab_size]))
def mul_fn(current_input):
return tf.matmul(current_input, weights) + biases
logits = tf.map_fn(mul_fn, outputs)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
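A note on the design choice in build_nn above: the per-timestep matmul via tf.map_fn works, but an equivalent and more common TF 1.x formulation applies one dense layer across the whole (batch, time, rnn_size) output tensor at once. A sketch of that alternative, reusing the outputs and vocab_size names from the function above:
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None,
                                           weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
                                           biases_initializer=tf.zeros_initializer())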
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    # Drop the words at the end that don't fill a complete batch
    inputs = np.array(int_text[:n_batches * words_per_batch])
    # Targets are the inputs shifted one word to the left
    targets = np.roll(inputs, -1)
    if len(int_text) > n_batches * words_per_batch:
        # Use the first left-over word as the final target instead of wrapping around
        targets[-1] = int_text[n_batches * words_per_batch]
    # Reshape to (batch_size, n_batches * seq_length), then split along the time axis
    input_batches = np.split(inputs.reshape(batch_size, -1), n_batches, axis=1)
    target_batches = np.split(targets.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(input_batches, target_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
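A quick way to sanity-check an implementation against the worked example above (editorial snippet; the variable name is illustrative):
example_batches = get_batches(list(range(1, 16)), 2, 3)
print(example_batches.shape)   # expected (2, 2, 2, 3): (n_batches, 2, batch_size, seq_length)
print(example_batches[0][0])   # expected [[1 2 3], [7 8 9]]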
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
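As a rough starting point only (these numbers are an editorial assumption, not values from the notebook), settings in this neighborhood usually train to a reasonable loss on the Moe's Tavern subset:
num_epochs = 100
batch_size = 128
rnn_size = 256
seq_length = 16
learning_rate = 0.01
show_every_n_batches = 10
They are only a baseline; tune them against the reported training loss.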
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
    input_tensor = loaded_graph.get_tensor_by_name("input:0")
    initial_state_tensor = loaded_graph.get_tensor_by_name("initial_state:0")
    final_state_tensor = loaded_graph.get_tensor_by_name("final_state:0")
    probs_tensor = loaded_graph.get_tensor_by_name("probs:0")
    return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
    # Sample from the distribution rather than taking the argmax, so the
    # generated script does not keep repeating the most likely word
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
8,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, here's the SPA power function
Step1: Here are two helper functions for computing the dot product over space, and for plotting the results
Step2: Let's do a quick example of using the above functions to represent two items at (2,1.3) and (0,0.7)
Step3: So, that lets us take a vector and turn it into a spatial map. Now let's try going the other way around
Step4: This can be treated as a least-sqares minimization problem. In paticular, we're trying to build the above map using a basis space. The basis vectors in that space are the spatial maps of the D unit vectors in our vector space!! So let's compute those, and use our standard nengo solver
Step5: Yay!
However, one possible problem with this approach is that the norm of this vector is unconstrained
Step6: A better solution would add a constraint on the norm. For that, we use cvxpy
Step7: Looks like the accuracy depends on what limit we put on the norm. Let's see how that varies
Step8: Looks like it works fine with norms that aren't too large.
As Aaron pointed out, we can also look at this basis space | Python Code:
def power(s, e):
x = np.fft.ifft(np.fft.fft(s.v) ** e).real
return spa.SemanticPointer(data=x)
Explanation: First, here's the SPA power function:
End of explanation
def spatial_dot(v, X, Y, xs, ys, transform=1):
if isinstance(v, spa.SemanticPointer):
v = v.v
vs = np.zeros((len(ys),len(xs)))
for i,x in enumerate(xs):
for j, y in enumerate(ys):
t = power(X, x)*power(Y,y)*transform
vs[j,i] = np.dot(v, t.v)
return vs
def spatial_plot(vs, colorbar=True, vmin=-1, vmax=1, cmap='plasma'):
vs = vs[::-1, :]
plt.imshow(vs, interpolation='none', extent=(xs[0],xs[-1],ys[0],ys[-1]), vmax=vmax, vmin=vmin, cmap=cmap)
if colorbar:
plt.colorbar()
Explanation: Here are two helper functions for computing the dot product over space, and for plotting the results
End of explanation
D = 64
X = spa.SemanticPointer(D)
X.make_unitary()
Y = spa.SemanticPointer(D)
Y.make_unitary()
xs = np.linspace(-3, 3, 50)
ys = np.linspace(-3, 3, 50)
v = power(X,2)*power(Y,1.3) + power(X,0)*power(Y,0.7)
vs = spatial_dot(v, X, Y, xs, ys)
spatial_plot(vs)
Explanation: Let's do a quick example of using the above functions to represent two items at (2,1.3) and (0,0.7)
End of explanation
desired = np.zeros((len(xs),len(ys)))
for i,x in enumerate(xs):
for j, y in enumerate(ys):
if 0<x<2 and -1<y<=3:
val = 1
else:
val = 0
desired[j, i] = val
spatial_plot(desired)
Explanation: So, that lets us take a vector and turn it into a spatial map. Now let's try going the other way around: specify a desired map, and find the vector that gives that.
End of explanation
A = np.array([spatial_dot(np.eye(D)[i], X, Y, xs, ys).flatten() for i in range(D)])
import nengo
v, info = nengo.solvers.LstsqL2(reg=0)(np.array(A).T, desired.flatten())
vs = spatial_dot(v, X, Y, xs, ys)
rmse = np.sqrt(np.mean((vs-desired)**2))
print(rmse)
spatial_plot(vs)
Explanation: This can be treated as a least-squares minimization problem. In particular, we're trying to build the above map using a basis space. The basis vectors in that space are the spatial maps of the D unit vectors in our vector space!! So let's compute those, and use our standard nengo solver:
End of explanation
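For reference, the problem being solved can be written out explicitly (an editorial formulation using the names from the code above, with $A_{ki}$ the spatial map of unit vector $i$ evaluated at grid point $k$ and $b_k$ the flattened desired map):
$$\min_{v \in \mathbb{R}^{D}} \sum_k \Big(\sum_{i=1}^{D} A_{ki}\, v_i - b_k\Big)^2$$
The cvxpy solver further below keeps the same objective and adds the constraint $\|v\|_2 \le c$.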
np.linalg.norm(v)
Explanation: Yay!
However, one possible problem with this approach is that the norm of this vector is unconstrained:
End of explanation
import cvxpy as cvx
class CVXSolver(nengo.solvers.Solver):
def __init__(self, norm_limit):
super(CVXSolver, self).__init__(weights=False)
self.norm_limit = norm_limit
def __call__(self, A, Y, rng=np.random, E=None):
N = A.shape[1]
D = Y.shape[1]
d = cvx.Variable((N, D))
error = cvx.sum_squares(A * d - Y)
cvx_prob = cvx.Problem(cvx.Minimize(error), [cvx.norm(d) <= self.norm_limit])
cvx_prob.solve()
decoder = d.value
rmses = np.sqrt(np.mean((Y-np.dot(A, decoder))**2, axis=0))
return decoder, dict(rmses=rmses)
v2, info2 = CVXSolver(norm_limit=10)(np.array(A).T, desired.flatten().reshape(-1,1))
v2.shape = D,
vs2 = spatial_dot(v2, X, Y, xs, ys)
rmse2 = np.sqrt(np.mean((vs2-desired)**2))
print('rmse:', rmse2)
spatial_plot(vs2)
print('norm:', np.linalg.norm(v2))
Explanation: A better solution would add a constraint on the norm. For that, we use cvxpy
End of explanation
plt.figure(figsize=(10,4))
limits = np.arange(10)+1
for i, limit in enumerate(limits):
plt.subplot(2, 5, i+1)
vv, _ = CVXSolver(norm_limit=limit)(np.array(A).T, desired.flatten().reshape(-1,1))
s = spatial_dot(vv.flatten(), X, Y, xs, ys)
error = np.sqrt(np.mean((s-desired)**2))
spatial_plot(s, colorbar=False)
plt.title('norm: %g\nrmse: %1.2f' % (limit, error))
plt.xticks([])
plt.yticks([])
Explanation: Looks like the accuracy depends on what limit we put on the norm. Let's see how that varies:
End of explanation
import seaborn
SA = np.array(A).T # A matrix passed to solver
gamma = SA.T.dot(SA)
U, S, V = np.linalg.svd(gamma)
w = int(np.sqrt(D))
h = int(np.ceil(D // w))
plt.figure(figsize=(16, 16))
for i in range(len(U)):
# the columns of U are the left-singular vectors
vs = spatial_dot(U[:, i], X, Y, xs, ys)
plt.subplot(w, h, i+1)
spatial_plot(vs, colorbar=False, vmin=None, vmax=None, cmap=seaborn.diverging_palette(150, 275, s=80, l=55, as_cmap=True))
plt.title(r"$\sigma_{%d}(A^T A) = %d$" % (i+1, S[i]))
plt.xticks([])
plt.yticks([])
plt.show()
Explanation: Looks like it works fine with norms that aren't too large.
As Aaron pointed out, we can also look at this basis space:
End of explanation |
8,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) provides for a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions.
In this assignment, you will
* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method
Note to Amazon EC2 users
Step1: Upgrading to Scipy 0.16.0 or later. This assignment requires SciPy 0.16.0 or later. To upgrade, uncomment and run the following cell
Step2: Load in the Wikipedia dataset
Step3: For this assignment, let us assign a unique ID to each document.
Step4: Extract TF-IDF matrix
We first use GraphLab Create to compute a TF-IDF representation for each document.
Step6: For the remainder of the assignment, we will use sparse matrices. Sparse matrices are matrices that have a small number of nonzero entries
Step7: The conversion should take a few minutes to complete.
Step8: Checkpoint
Step9: Train an LSH model
LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as random binary projection, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.
The first step is to generate a collection of random vectors from the standard Gaussian distribution.
Step10: To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
Step11: We now generate random vectors of the same dimensionality as our vocubulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
Step12: Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
We'd like to decide into which bin document 0 should go. Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.
Step13: Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
Step14: We can compute all of the bin index bits at once as follows. Note the absence of the explicit for loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficent manner, unlike the for loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
Step15: All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
Step16: We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer
Step17: Since it's the dot product again, we batch it with a matrix operation
Step18: This array gives us the integer index of the bins for all documents.
Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.
Compute the integer bin indices. This step is already completed.
For each document in the dataset, do the following
Step19: Checkpoint.
Step20: Note. We will be using the model trained here in the following sections, unless otherwise indicated.
Inspect bins
Let us look at some documents and see which bins they fall into.
Step21: Quiz Question. What is the document id of Barack Obama's article?
Quiz Question. Which bin contains Barack Obama's article? Enter its integer index.
Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
Step22: Quiz Question. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
16 out of 16 places (Barack Obama and Joe Biden fall into the same bin)
14 out of 16 places
12 out of 16 places
10 out of 16 places
8 out of 16 places
Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.
Step23: How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
Step24: There is four other documents that belong to the same bin. Which document are they?
Step25: It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
Step26: Moral of the story. Similar data points will in general tend to fall into nearby bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.
Query the LSH model
Let us first implement the logic for searching nearby neighbors, which goes like this
Step28: With this output in mind, implement the logic for nearby bin search
Step29: Checkpoint. Running the function with search_radius=0 should yield the list of documents belonging to the same bin as the query.
Step30: Checkpoint. Running the function with search_radius=1 adds more documents to the fore.
Step31: Note. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query.
Step32: Let's try it out with Obama
Step33: To identify the documents, it's helpful to join this table with the Wikipedia table
Step34: We have shown that we have a working LSH implementation!
Experimenting with your LSH implementation
In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search.
Effect of nearby bin search
How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius
Step35: Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables
Step36: Some observations
Step37: The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.
Step38: The observations for Barack Obama generalize to the entire dataset.
Effect of number of random vectors
Let us now turn our focus to the remaining parameter | Python Code:
import numpy as np
import graphlab
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import norm
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import copy
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) provides for a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions.
In this assignment, you will
* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
End of explanation
# !conda upgrade -y scipy
Explanation: Upgrading to Scipy 0.16.0 or later. This assignment requires SciPy 0.16.0 or later. To upgrade, uncomment and run the following cell:
End of explanation
wiki = graphlab.SFrame('people_wiki.gl/')
Explanation: Load in the Wikipedia dataset
End of explanation
wiki = wiki.add_row_number()
wiki
Explanation: For this assignment, let us assign a unique ID to each document.
End of explanation
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
wiki
Explanation: Extract TF-IDF matrix
We first use GraphLab Create to compute a TF-IDF representation for each document.
End of explanation
def sframe_to_scipy(column):
Convert a dict-typed SArray into a SciPy sparse matrix.
Returns
-------
mat : a SciPy sparse matrix where mat[i, j] is the value of word j for document i.
mapping : a dictionary where mapping[j] is the word whose values are in column j.
# Create triples of (row_id, feature_id, count).
x = graphlab.SFrame({'X1':column})
# 1. Add a row number.
x = x.add_row_number()
# 2. Stack will transform x to have a row for each unique (row, key) pair.
x = x.stack('X1', ['feature', 'value'])
# Map words into integers using a OneHotEncoder feature transformation.
f = graphlab.feature_engineering.OneHotEncoder(features=['feature'])
# We first fit the transformer using the above data.
f.fit(x)
# The transform method will add a new column that is the transformed version
# of the 'word' column.
x = f.transform(x)
# Get the feature mapping.
mapping = f['feature_encoding']
# Get the actual word id.
x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0])
# Create numpy arrays that contain the data for the sparse matrix.
i = np.array(x['id'])
j = np.array(x['feature_id'])
v = np.array(x['value'])
width = x['id'].max() + 1
height = x['feature_id'].max() + 1
# Create a sparse matrix.
mat = csr_matrix((v, (i, j)), shape=(width, height))
return mat, mapping
Explanation: For the remainder of the assignment, we will use sparse matrices. Sparse matrices are [matrices](https://en.wikipedia.org/wiki/Matrix_(mathematics%29 ) that have a small number of nonzero entries. A good data structure for sparse matrices would only store the nonzero entries to save space and speed up computation. SciPy provides a highly-optimized library for sparse matrices. Many matrix operations available for NumPy arrays are also available for SciPy sparse matrices.
We first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format.
End of explanation
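As a tiny illustration of the CSR format (an editorial example, unrelated to the Wikipedia data): only the nonzero values and their (row, column) coordinates are stored.
tiny = csr_matrix(([1.0, 2.0, 3.0], ([0, 0, 1], [0, 2, 1])), shape=(2, 3))
print tiny.toarray()   # [[ 1.  0.  2.]  [ 0.  3.  0.]]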
start=time.time()
corpus, mapping = sframe_to_scipy(wiki['tf_idf'])
end=time.time()
print end-start
Explanation: The conversion should take a few minutes to complete.
End of explanation
assert corpus.shape == (59071, 547979)
print 'Check passed correctly!'
Explanation: Checkpoint: The following code block should return 'Check passed correctly', indicating that your matrix contains TF-IDF values for 59071 documents and 547979 unique words. Otherwise, it will return Error.
End of explanation
def generate_random_vectors(num_vector, dim):
return np.random.randn(dim, num_vector)
Explanation: Train an LSH model
LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as random binary projection, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.
The first step is to generate a collection of random vectors from the standard Gaussian distribution.
End of explanation
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
np.random.seed(0) # set seed=0 for consistent results
generate_random_vectors(num_vector=3, dim=5)
Explanation: To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
End of explanation
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
random_vectors = generate_random_vectors(num_vector=16, dim=547979)
random_vectors.shape
Explanation: We now generate random vectors of the same dimensionality as our vocabulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
End of explanation
doc = corpus[0, :] # vector of tf-idf values for document 0
doc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign
Explanation: Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
We'd like to decide into which bin document 0 should go. Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.
End of explanation
doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
Explanation: Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
End of explanation
doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
Explanation: We can compute all of the bin index bits at once as follows. Note the absence of the explicit for loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficient manner, unlike the for loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
End of explanation
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
Explanation: All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
End of explanation
doc = corpus[0, :] # first document
index_bits = (doc.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print index_bits
print powers_of_two
print index_bits.dot(powers_of_two)
Explanation: We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer:
Bin index integer
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0] => 0
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1] => 1
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0] => 2
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1] => 3
...
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0] => 65532
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1] => 65533
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0] => 65534
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] => 65535 (= 2^16-1)
By the rules of binary number representation, we just need to compute the dot product between the document vector and the vector consisting of powers of 2:
End of explanation
index_bits = corpus.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
Explanation: Since it's the dot product again, we batch it with a matrix operation:
End of explanation
def train_lsh(data, num_vector=16, seed=None):
    dim = data.shape[1]
if seed is not None:
np.random.seed(seed)
random_vectors = generate_random_vectors(num_vector, dim)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
table = {}
# Partition data points into bins
bin_index_bits = (data.dot(random_vectors) >= 0)
# Encode bin index bits into integers
bin_indices = bin_index_bits.dot(powers_of_two)
# Update `table` so that `table[i]` is the list of document ids with bin index equal to i.
for data_index, bin_index in enumerate(bin_indices):
if bin_index not in table:
# If no list yet exists for this bin, assign the bin an empty list.
table[bin_index] = ... # YOUR CODE HERE
# Fetch the list of document ids associated with the bin and add the document id to the end.
... # YOUR CODE HERE
model = {'data': data,
'bin_index_bits': bin_index_bits,
'bin_indices': bin_indices,
'table': table,
'random_vectors': random_vectors,
'num_vector': num_vector}
return model
Explanation: This array gives us the integer index of the bins for all documents.
Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.
Compute the integer bin indices. This step is already completed.
For each document in the dataset, do the following:
Get the integer bin index for the document.
Fetch the list of document ids associated with the bin; if no list yet exists for this bin, assign the bin an empty list.
Add the document id to the end of the list.
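If you want to check your understanding of the data structure itself (independently of the YOUR CODE HERE placeholders above), one common way to accumulate a dictionary of lists uses setdefault; the bin indices below are made up purely for illustration:
toy_table = {}
for doc_id, bin_index in enumerate([3, 7, 3]):   # made-up bin indices for 3 documents
    toy_table.setdefault(bin_index, []).append(doc_id)
# toy_table is now {3: [0, 2], 7: [1]}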
End of explanation
model = train_lsh(corpus, num_vector=16, seed=143)
table = model['table']
if 0 in table and table[0] == [39583] and \
143 in table and table[143] == [19693, 28277, 29776, 30399]:
print 'Passed!'
else:
print 'Check your code.'
Explanation: Checkpoint.
End of explanation
wiki[wiki['name'] == 'Barack Obama']
Explanation: Note. We will be using the model trained here in the following sections, unless otherwise indicated.
Inspect bins
Let us look at some documents and see which bins they fall into.
End of explanation
wiki[wiki['name'] == 'Joe Biden']
Explanation: Quiz Question. What is the document id of Barack Obama's article?
Quiz Question. Which bin contains Barack Obama's article? Enter its integer index.
Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
End of explanation
wiki[wiki['name']=='Wynn Normington Hugh-Jones']
print np.array(model['bin_index_bits'][22745], dtype=int) # list of 0/1's
print model['bin_indices'][22745] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][22745]
Explanation: Quiz Question. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
16 out of 16 places (Barack Obama and Joe Biden fall into the same bin)
14 out of 16 places
12 out of 16 places
10 out of 16 places
8 out of 16 places
Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.
End of explanation
model['table'][model['bin_indices'][35817]]
Explanation: How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
End of explanation
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama
docs = wiki.filter_by(values=doc_ids, column_name='id') # filter by id column
docs
Explanation: There are four other documents that belong to the same bin. Which documents are they?
End of explanation
def cosine_distance(x, y):
xy = x.dot(y.T)
dist = xy/(norm(x)*norm(y))
return 1-dist[0,0]
obama_tf_idf = corpus[35817,:]
biden_tf_idf = corpus[24478,:]
print '================= Cosine distance from Barack Obama'
print 'Barack Obama - {0:24s}: {1:f}'.format('Joe Biden',
cosine_distance(obama_tf_idf, biden_tf_idf))
for doc_id in doc_ids:
doc_tf_idf = corpus[doc_id,:]
print 'Barack Obama - {0:24s}: {1:f}'.format(wiki[doc_id]['name'],
cosine_distance(obama_tf_idf, doc_tf_idf))
Explanation: It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
End of explanation
from itertools import combinations
num_vector = 16
search_radius = 3
for diff in combinations(range(num_vector), search_radius):
print diff
Explanation: Moral of the story. Similar data points will in general tend to fall into nearby bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.
Query the LSH model
Let us first implement the logic for searching nearby neighbors, which goes like this:
1. Let L be the bit representation of the bin that contains the query document.
2. Consider all documents in bin L.
3. Consider documents in the bins whose bit representation differs from L by 1 bit.
4. Consider documents in the bins whose bit representation differs from L by 2 bits.
...
To obtain candidate bins that differ from the query bin by some number of bits, we use itertools.combinations, which produces all possible subsets of a given list. See this documentation for details.
1. Decide on the search radius r. This will determine the number of different bits between the two vectors.
2. For each subset (n_1, n_2, ..., n_r) of the list [0, 1, 2, ..., num_vector-1], do the following:
* Flip the bits (n_1, n_2, ..., n_r) of the query bin to produce a new bit vector.
* Fetch the list of documents belonging to the bin indexed by the new bit vector.
* Add those documents to the candidate set.
Each line of output from the following cell is a 3-tuple indicating where the candidate bin would differ from the query bin. For instance,
(0, 1, 3)
indicates that the candidate bin differs from the query bin in the first, second, and fourth bits.
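As a concrete, made-up illustration of the flipping step (not tied to any particular document), a boolean numpy vector can be flipped at chosen positions like this, using the example tuple (0, 1, 3) from above:
demo_bits = np.array([True, True, False, False, True])
for position in (0, 1, 3):
    demo_bits[position] = not demo_bits[position]
# demo_bits is now [False, False, False, True, True]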
End of explanation
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
For a given query vector and trained LSH model, return all candidate neighbors for
the query among all bins within the given search radius.
Example usage
-------------
>>> model = train_lsh(corpus, num_vector=16, seed=143)
>>> q = model['bin_index_bits'][0] # vector for the first document
>>> candidates = search_nearby_bins(q, model['table'])
num_vector = len(query_bin_bits)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
# Allow the user to provide an initial set of candidates.
candidate_set = copy(initial_candidates)
for different_bits in combinations(range(num_vector), search_radius):
# Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
## Hint: you can iterate over a tuple like a list
alternate_bits = copy(query_bin_bits)
for i in different_bits:
alternate_bits[i] = ... # YOUR CODE HERE
# Convert the new bit vector to an integer index
nearby_bin = alternate_bits.dot(powers_of_two)
# Fetch the list of documents belonging to the bin indexed by the new bit vector.
# Then add those documents to candidate_set
# Make sure that the bin exists in the table!
# Hint: update() method for sets lets you add an entire list to the set
if nearby_bin in table:
... # YOUR CODE HERE: Update candidate_set with the documents in this bin.
return candidate_set
Explanation: With this output in mind, implement the logic for nearby bin search:
End of explanation
obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)
if candidate_set == set([35817, 21426, 53937, 39426, 50261]):
print 'Passed test'
else:
print 'Check your code'
print 'List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261'
Explanation: Checkpoint. Running the function with search_radius=0 should yield the list of documents belonging to the same bin as the query.
End of explanation
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)
if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547,
23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676,
19699, 2804, 20347]):
print 'Passed test'
else:
print 'Check your code'
Explanation: Checkpoint. Running the function with search_radius=1 adds more documents to the candidate set.
End of explanation
def query(vec, model, k, max_search_radius):
data = model['data']
table = model['table']
random_vectors = model['random_vectors']
num_vector = random_vectors.shape[1]
# Compute bin index for the query vector, in bit representation.
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()
# Search nearby bins and collect candidates
candidate_set = set()
for search_radius in xrange(max_search_radius+1):
candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)
# Sort candidates by their true distances from the query
nearest_neighbors = graphlab.SFrame({'id':candidate_set})
candidates = data[np.array(list(candidate_set)),:]
nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True), len(candidate_set)
Explanation: Note. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query.
End of explanation
query(corpus[35817,:], model, k=10, max_search_radius=3)
Explanation: Let's try it out with Obama:
End of explanation
query(corpus[35817,:], model, k=10, max_search_radius=3)[0].join(wiki[['id', 'name']], on='id').sort('distance')
Explanation: To identify the documents, it's helpful to join this table with the Wikipedia table:
End of explanation
wiki[wiki['name']=='Barack Obama']
num_candidates_history = []
query_time_history = []
max_distance_from_query_history = []
min_distance_from_query_history = []
average_distance_from_query_history = []
for max_search_radius in xrange(17):
start=time.time()
result, num_candidates = query(corpus[35817,:], model, k=10,
max_search_radius=max_search_radius)
end=time.time()
query_time = end-start
print 'Radius:', max_search_radius
print result.join(wiki[['id', 'name']], on='id').sort('distance')
average_distance_from_query = result['distance'][1:].mean()
max_distance_from_query = result['distance'][1:].max()
min_distance_from_query = result['distance'][1:].min()
num_candidates_history.append(num_candidates)
query_time_history.append(query_time)
average_distance_from_query_history.append(average_distance_from_query)
max_distance_from_query_history.append(max_distance_from_query)
min_distance_from_query_history.append(min_distance_from_query)
Explanation: We have shown that we have a working LSH implementation!
Experimenting with your LSH implementation
In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search.
Effect of nearby bin search
How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius:
* Number of candidate documents considered
* Query time
* Distance of approximate neighbors from the query
Let us run LSH multiple times, each with different radii for nearby bin search. We will measure the three variables as discussed above.
End of explanation
plt.figure(figsize=(7,4.5))
plt.plot(num_candidates_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('# of documents searched')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(query_time_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(average_distance_from_query_history, linewidth=4, label='Average of 10 neighbors')
plt.plot(max_distance_from_query_history, linewidth=4, label='Farthest of 10 neighbors')
plt.plot(min_distance_from_query_history, linewidth=4, label='Closest of 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance of neighbors')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables:
End of explanation
def brute_force_query(vec, data, k):
num_data_points = data.shape[0]
# Compute distances for ALL data points in training set
nearest_neighbors = graphlab.SFrame({'id':range(num_data_points)})
nearest_neighbors['distance'] = pairwise_distances(data, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True)
Explanation: Some observations:
* As we increase the search radius, we find more neighbors that are a smaller distance away.
* With increased search radius comes a greater number of documents that have to be searched. Query time is higher as a consequence.
* With sufficiently high search radius, the results of LSH begin to resemble the results of brute-force search.
Quiz Question. What was the smallest search radius that yielded the correct nearest neighbor, namely Joe Biden?
Quiz Question. Suppose our goal was to produce 10 approximate nearest neighbors whose average distance from the query document is within 0.01 of the average for the true 10 nearest neighbors. For Barack Obama, the true 10 nearest neighbors are on average about 0.77. What was the smallest search radius for Barack Obama that produced an average distance of 0.78 or better?
Quality metrics for neighbors
The above analysis is limited by the fact that it was run with a single query, namely Barack Obama. We should repeat the analysis for the entirety of data. Iterating over all documents would take a long time, so let us randomly choose 10 documents for our analysis.
For each document, we first compute the true 25 nearest neighbors, and then run LSH multiple times. We look at two metrics:
Precision@10: How many of the 10 neighbors given by LSH are among the true 25 nearest neighbors?
Average cosine distance of the neighbors from the query
Then we run LSH multiple times with different search radii.
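For concreteness, the precision@10 computation used below boils down to a set intersection; the two ID sets here are made up purely for illustration:
lsh_ids = set([11, 22, 33, 44, 55, 66, 77, 88, 99, 100])   # 10 ids returned by LSH (made up)
true_ids = set(range(0, 50))                               # the 25 true nearest neighbor ids would go here (made up)
precision_at_10 = len(lsh_ids & true_ids) / 10.0
# counts how many of the 10 LSH neighbors appear among the true nearest neighbors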
End of explanation
max_radius = 17
precision = {i:[] for i in xrange(max_radius)}
average_distance = {i:[] for i in xrange(max_radius)}
query_time = {i:[] for i in xrange(max_radius)}
np.random.seed(0)
num_queries = 10
for i, ix in enumerate(np.random.choice(corpus.shape[0], num_queries, replace=False)):
print('%s / %s' % (i, num_queries))
ground_truth = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for r in xrange(1,max_radius):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=r)
end = time.time()
query_time[r].append(end-start)
# precision = (# of neighbors both in result and ground_truth)/10.0
precision[r].append(len(set(result['id']) & ground_truth)/10.0)
average_distance[r].append(result['distance'][1:].mean())
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(average_distance[i]) for i in xrange(1,17)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(precision[i]) for i in xrange(1,17)], linewidth=4, label='Precision@10')
plt.xlabel('Search radius')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(query_time[i]) for i in xrange(1,17)], linewidth=4, label='Query time')
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.
End of explanation
precision = {i:[] for i in xrange(5,20)}
average_distance = {i:[] for i in xrange(5,20)}
query_time = {i:[] for i in xrange(5,20)}
num_candidates_history = {i:[] for i in xrange(5,20)}
ground_truth = {}
np.random.seed(0)
num_queries = 10
docs = np.random.choice(corpus.shape[0], num_queries, replace=False)
for i, ix in enumerate(docs):
ground_truth[ix] = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for num_vector in xrange(5,20):
print('num_vector = %s' % (num_vector))
model = train_lsh(corpus, num_vector, seed=143)
for i, ix in enumerate(docs):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=3)
end = time.time()
query_time[num_vector].append(end-start)
precision[num_vector].append(len(set(result['id']) & ground_truth[ix])/10.0)
average_distance[num_vector].append(result['distance'][1:].mean())
num_candidates_history[num_vector].append(num_candidates)
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(average_distance[i]) for i in xrange(5,20)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('# of random vectors')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(precision[i]) for i in xrange(5,20)], linewidth=4, label='Precision@10')
plt.xlabel('# of random vectors')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(query_time[i]) for i in xrange(5,20)], linewidth=4, label='Query time (seconds)')
plt.xlabel('# of random vectors')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(num_candidates_history[i]) for i in xrange(5,20)], linewidth=4,
label='# of documents searched')
plt.xlabel('# of random vectors')
plt.ylabel('# of documents searched')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: The observations for Barack Obama generalize to the entire dataset.
Effect of number of random vectors
Let us now turn our focus to the remaining parameter: the number of random vectors. We run LSH with different numbers of random vectors, ranging from 5 to 20. We fix the search radius to 3.
Allow a few minutes for the following cell to complete.
End of explanation |
8,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setting the EEG reference
This tutorial describes how to set or change the EEG reference in MNE-Python.
Step1: Background
EEG measures a voltage (difference in electric potential) between each
electrode and a reference electrode. This means that whatever signal is
present at the reference electrode is effectively subtracted from all the
measurement electrodes. Therefore, an ideal reference signal is one that
captures none of the brain-specific fluctuations in electric potential,
while capturing all of the environmental noise/interference that is being
picked up by the measurement electrodes.
In practice, this means that the reference electrode is often placed in a
location on the subject's body and close to their head (so that any
environmental interference affects the reference and measurement electrodes
similarly) but as far away from the neural sources as possible (so that the
reference signal doesn't pick up brain-based fluctuations). Typical reference
locations are the subject's earlobe, nose, mastoid process, or collarbone.
Each of these has advantages and disadvantages regarding how much brain
signal it picks up (e.g., the mastoids pick up a fair amount compared to the
others), and regarding the environmental noise it picks up (e.g., earlobe
electrodes may shift easily, and have signals more similar to electrodes on
the same side of the head).
Even in cases where no electrode is specifically designated as the reference,
EEG recording hardware will still treat one of the scalp electrodes as the
reference, and the recording software may or may not display it to you (it
might appear as a completely flat channel, or the software might subtract out
the average of all signals before displaying, making it look like there is
no reference).
Setting or changing the reference channel
If you want to recompute your data with a different reference than was used
when the raw data were recorded and/or saved, MNE-Python provides the
Step2: If a scalp electrode was used as reference but was not saved alongside the
raw data (reference channels often aren't), you may wish to add it back to
the dataset before re-referencing. For example, if your EEG system recorded
with channel Fp1 as the reference but did not include Fp1 in the data
file, using
Step3: By default,
Step4: .. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ
Step5: Notice that the new reference (EEG 050) is now flat, while the original
reference channel that we added back to the data (EEG 999) has a non-zero
signal. Notice also that EEG 053 (which is marked as "bad" in
raw.info['bads']) is not affected by the re-referencing.
Setting average reference
To set a "virtual reference" that is the average of all channels, you can use
Step6: Creating the average reference as a projector
If using an average reference, it is possible to create the reference as a
Step7: Creating the average reference as a projector has a few advantages | Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
raw.pick(['EEG 0{:02}'.format(n) for n in range(41, 60)])
Explanation: Setting the EEG reference
This tutorial describes how to set or change the EEG reference in MNE-Python.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping it to save memory. Since
this tutorial deals specifically with EEG, we'll also restrict the dataset to
just a few EEG channels so the plots are easier to see:
End of explanation
# code lines below are commented out because the sample data doesn't have
# earlobe or mastoid channels, so this is just for demonstration purposes:
# use a single channel reference (left earlobe)
# raw.set_eeg_reference(ref_channels=['A1'])
# use average of mastoid channels as reference
# raw.set_eeg_reference(ref_channels=['M1', 'M2'])
Explanation: Background
EEG measures a voltage (difference in electric potential) between each
electrode and a reference electrode. This means that whatever signal is
present at the reference electrode is effectively subtracted from all the
measurement electrodes. Therefore, an ideal reference signal is one that
captures none of the brain-specific fluctuations in electric potential,
while capturing all of the environmental noise/interference that is being
picked up by the measurement electrodes.
In practice, this means that the reference electrode is often placed in a
location on the subject's body and close to their head (so that any
environmental interference affects the reference and measurement electrodes
similarly) but as far away from the neural sources as possible (so that the
reference signal doesn't pick up brain-based fluctuations). Typical reference
locations are the subject's earlobe, nose, mastoid process, or collarbone.
Each of these has advantages and disadvantages regarding how much brain
signal it picks up (e.g., the mastoids pick up a fair amount compared to the
others), and regarding the environmental noise it picks up (e.g., earlobe
electrodes may shift easily, and have signals more similar to electrodes on
the same side of the head).
Even in cases where no electrode is specifically designated as the reference,
EEG recording hardware will still treat one of the scalp electrodes as the
reference, and the recording software may or may not display it to you (it
might appear as a completely flat channel, or the software might subtract out
the average of all signals before displaying, making it look like there is
no reference).
Setting or changing the reference channel
If you want to recompute your data with a different reference than was used
when the raw data were recorded and/or saved, MNE-Python provides the
:meth:~mne.io.Raw.set_eeg_reference method on :class:~mne.io.Raw objects
as well as the :func:mne.add_reference_channels function. To use an
existing channel as the new reference, use the
:meth:~mne.io.Raw.set_eeg_reference method; you can also designate multiple
existing electrodes as reference channels, as is sometimes done with mastoid
references:
End of explanation
raw.plot()
Explanation: If a scalp electrode was used as reference but was not saved alongside the
raw data (reference channels often aren't), you may wish to add it back to
the dataset before re-referencing. For example, if your EEG system recorded
with channel Fp1 as the reference but did not include Fp1 in the data
file, using :meth:~mne.io.Raw.set_eeg_reference to set (say) Cz as the
new reference will then subtract out the signal at Cz without restoring
the signal at Fp1. In this situation, you can add back Fp1 as a flat
channel prior to re-referencing using :func:~mne.add_reference_channels.
(Since our example data doesn't use the 10-20 electrode naming system_, the
example below adds EEG 999 as the missing reference, then sets the
reference to EEG 050.) Here's how the data looks in its original state:
End of explanation
# add new reference channel (all zero)
raw_new_ref = mne.add_reference_channels(raw, ref_channels=['EEG 999'])
raw_new_ref.plot()
Explanation: By default, :func:~mne.add_reference_channels returns a copy, so we can go
back to our original raw object later. If you wanted to alter the
existing :class:~mne.io.Raw object in-place you could specify
copy=False.
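If you do want the in-place behavior, the call would look something like the sketch below; it is not executed in this tutorial, which keeps the copy so the original raw stays available:
# modify `raw` itself instead of returning a modified copy
mne.add_reference_channels(raw, ref_channels=['EEG 999'], copy=False)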
End of explanation
# set reference to `EEG 050`
raw_new_ref.set_eeg_reference(ref_channels=['EEG 050'])
raw_new_ref.plot()
Explanation: .. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ
End of explanation
# use the average of all channels as reference
raw_avg_ref = raw.copy().set_eeg_reference(ref_channels='average')
raw_avg_ref.plot()
Explanation: Notice that the new reference (EEG 050) is now flat, while the original
reference channel that we added back to the data (EEG 999) has a non-zero
signal. Notice also that EEG 053 (which is marked as "bad" in
raw.info['bads']) is not affected by the re-referencing.
Setting average reference
To set a "virtual reference" that is the average of all channels, you can use
:meth:~mne.io.Raw.set_eeg_reference with ref_channels='average'. Just
as above, this will not affect any channels marked as "bad", nor will it
include bad channels when computing the average. However, it does modify the
:class:~mne.io.Raw object in-place, so we'll make a copy first so we can
still go back to the unmodified :class:~mne.io.Raw object later:
End of explanation
raw.set_eeg_reference('average', projection=True)
print(raw.info['projs'])
Explanation: Creating the average reference as a projector
If using an average reference, it is possible to create the reference as a
:term:projector rather than subtracting the reference from the data
immediately by specifying projection=True:
End of explanation
for title, proj in zip(['Original', 'Average'], [False, True]):
fig = raw.plot(proj=proj, n_channels=len(raw))
# make room for title
fig.subplots_adjust(top=0.9)
fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')
Explanation: Creating the average reference as a projector has a few advantages:
It is possible to turn projectors on or off when plotting, so it is easy
to visualize the effect that the average reference has on the data.
If additional channels are marked as "bad" or if a subset of channels are
later selected, the projector will be re-computed to take these changes
into account (thus guaranteeing that the signal is zero-mean).
If there are other unapplied projectors affecting the EEG channels (such
as SSP projectors for removing heartbeat or blink artifacts), EEG
re-referencing cannot be performed until those projectors are either
applied or removed; adding the EEG reference as a projector is not subject
to that constraint. (The reason this wasn't a problem when we applied the
non-projector average reference to raw_avg_ref above is that the
empty-room projectors included in the sample data :file:.fif file were
only computed for the magnetometers.)
End of explanation |
8,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Dictionary
Step2: Feature Matrix From Dictionary
Step3: View column names | Python Code:
# Load library
from sklearn.feature_extraction import DictVectorizer
Explanation: Title: Converting A Dictionary Into A Matrix
Slug: converting_a_dictionary_into_a_matrix
Summary: How to convert a dictionary into a feature matrix for machine learning in Python.
Date: 2016-09-06 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
# Our dictionary of data
data_dict = [{'Red': 2, 'Blue': 4},
{'Red': 4, 'Blue': 3},
{'Red': 1, 'Yellow': 2},
{'Red': 2, 'Yellow': 2}]
Explanation: Create Dictionary
End of explanation
# Create DictVectorizer object
dictvectorizer = DictVectorizer(sparse=False)
# Convert dictionary into feature matrix
features = dictvectorizer.fit_transform(data_dict)
# View feature matrix
features
Explanation: Feature Matrix From Dictionary
End of explanation
# View feature matrix column names
dictvectorizer.get_feature_names()
Explanation: View column names
End of explanation |
8,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.soft - Calcul numérique et Cython - correction
Step1: Exercice
Step2: solution avec notebook
Les préliminaires
Step3: Puis
Step6: solution sans notebook
Step7: La version Cython est 10 fois plus rapide. Et cela ne semble pas dépendre de la dimension du problème. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.soft - Numerical computation and Cython - corrected version
End of explanation
def distance_edition(mot1, mot2):
dist = { (-1,-1): 0 }
for i,c in enumerate(mot1) :
dist[i,-1] = dist[i-1,-1] + 1
dist[-1,i] = dist[-1,i-1] + 1
for j,d in enumerate(mot2) :
opt = [ ]
if (i-1,j) in dist :
x = dist[i-1,j] + 1
opt.append(x)
if (i,j-1) in dist :
x = dist[i,j-1] + 1
opt.append(x)
if (i-1,j-1) in dist :
x = dist[i-1,j-1] + (1 if c != d else 0)
opt.append(x)
dist[i,j] = min(opt)
return dist[len(mot1)-1,len(mot2)-1]
%timeit distance_edition("idstzance","distances")
Explanation: Exercise: Python/C applied to an edit distance
We start from the function given in the exercise statement.
End of explanation
%load_ext cython
Explanation: Solution using the notebook
The preliminaries:
End of explanation
%%cython --annotate
cimport cython
def cidistance_edition(str mot1, str mot2):
cdef int dist [500][500]
cdef int cost, c
cdef int l1 = len(mot1)
cdef int l2 = len(mot2)
dist[0][0] = 0
for i in range(l1):
dist[i+1][0] = dist[i][0] + 1
dist[0][i+1] = dist[0][i] + 1
for j in range(l2):
cost = dist[i][j+1] + 1
c = dist[i+1][j] + 1
if c < cost : cost = c
c = dist[i][j]
if mot1[i] != mot2[j] : c += 1
if c < cost : cost = c
dist[i+1][j+1] = cost
cost = dist[l1][l2]
return cost
mot1, mot2 = "idstzance","distances"
%timeit cidistance_edition(mot1, mot2)
Explanation: Then:
End of explanation
import sys
from pyquickhelper.loghelper import run_cmd
code =
def cdistance_edition(str mot1, str mot2):
cdef int dist [500][500]
cdef int cost, c
cdef int l1 = len(mot1)
cdef int l2 = len(mot2)
dist[0][0] = 0
for i in range(l1):
dist[i+1][0] = dist[i][0] + 1
dist[0][i+1] = dist[0][i] + 1
for j in range(l2):
cost = dist[i][j+1] + 1
c = dist[i+1][j] + 1
if c < cost : cost = c
c = dist[i][j]
if mot1[i] != mot2[j] : c += 1
if c < cost : cost = c
dist[i+1][j+1] = cost
cost = dist[l1][l2]
return cost
name = "cedit_distance"
with open(name + ".pyx","w") as f : f.write(code)
setup_code =
from distutils.core import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize("__NAME__.pyx",
compiler_directives={'language_level' : "3"})
)
.replace("__NAME__",name)
with open("setup.py","w") as f:
f.write(setup_code)
cmd = "{0} setup.py build_ext --inplace".format(sys.executable)
out,err = run_cmd(cmd)
if err is not None and err != '':
raise Exception(err)
import pyximport
pyximport.install()
import cedit_distance
from cedit_distance import cdistance_edition
mot1, mot2 = "idstzance","distances"
%timeit cdistance_edition(mot1, mot2)
Explanation: Solution without the notebook
End of explanation
mot1 = mot1 * 10
mot2 = mot2 * 10
%timeit distance_edition(mot1,mot2)
%timeit cdistance_edition(mot1, mot2)
Explanation: The Cython version is about 10 times faster, and the speed-up does not seem to depend on the problem size.
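To turn the two %timeit readouts into a single speed-up number, one option (a sketch, not part of the original notebook) is to time both functions with the timeit module directly:
import timeit
py_time = timeit.timeit(lambda: distance_edition(mot1, mot2), number=100)
cy_time = timeit.timeit(lambda: cidistance_edition(mot1, mot2), number=100)
speedup = py_time / cy_time  # roughly the factor quoted above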
End of explanation |
8,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 2
Step1: In this chapter, we show an example of image clustering. A deep feature (VGG16 fc6 activation) is extracted from each image using Keras, then the features are clustered using PQk-means.
First, let's read images from the CIFAR10 dataset.
Step2: When you run the above cell for the first time, this would take several minutes to download the dataset to your local space (typically ~/.keras/datasets).
The CIFAR10 dataset contains small color images, where each image is uint8 RGB 32x32 array. The shape of img_train is (50000, 32, 32, 3), and that of img_test is (10000, 32, 32, 3). Let's see some of them.
Step3: To train a PQ-encoder, we pick up the top 1000 images from img_train. The clustering will be run on the top 5000 images from img_test.
Step4: 2. Extract a deep feature (VGG16 fc6 activation) from each image using Keras
Next, let us extract a 4096-dimensional deep feature from each image. For the feature extactor, we employ an activation from the 6th full connected layer (in Keras implementation, it is called fc1) of the ImageNet pre-trained VGG16 model. See the tutorial of keras for more details.
Step5: For the first time, this also takes several minutes to download the ImageNet pre-trained weights.
Let us extract features from images as follows. This takes several minutes using a usual GPU such as GTX1080.
Step6: Now we have a set of 4096D features for both the train-dataset and the test-dataset. Note that features_train[0] is an image descriptor for img_train[0]
3. Run clustering on deep features
Let us train a PQ-encoder using the training dataset, and compress the deep features into PQ-codes
Step7: 4. Visualize the result of image clustering
Now we can visualize image clusters. As can be seen, each cluster has similar images such as "horses", "cars", etc. | Python Code:
import numpy
import pqkmeans
import tqdm
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Chapter 2: Image clustering
This chapter covers the following:
Read images from the CIFAR10 dataset
Extract a deep feature (VGG16 fc6 activation) from each image using Keras
Run clustering on deep features
Visualize the result of image clustering
Requisites:
- numpy
- pqkmeans
- keras
- tqdm
- scipy
- matplotlib
1. Read images from the CIFAR10 dataset
End of explanation
from keras.datasets import cifar10
(img_train, _), (img_test, _) = cifar10.load_data()
Explanation: In this chapter, we show an example of image clustering. A deep feature (VGG16 fc6 activation) is extracted from each image using Keras, then the features are clustered using PQk-means.
First, let's read images from the CIFAR10 dataset.
End of explanation
print("The first image of img_train:\n")
plt.imshow(img_train[0])
Explanation: When you run the above cell for the first time, this would take several minutes to download the dataset to your local space (typically ~/.keras/datasets).
The CIFAR10 dataset contains small color images, where each image is uint8 RGB 32x32 array. The shape of img_train is (50000, 32, 32, 3), and that of img_test is (10000, 32, 32, 3). Let's see some of them.
End of explanation
img_train = img_train[0:1000]
img_test = img_test[0:5000]
print("img_train.shape:\n{}".format(img_train.shape))
print("img_test.shape:\n{}".format(img_test.shape))
Explanation: To train a PQ-encoder, we pick up the top 1000 images from img_train. The clustering will be run on the top 5000 images from img_test.
End of explanation
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.models import Model
from scipy.misc import imresize
base_model = VGG16(weights='imagenet') # Read the ImageNet pre-trained VGG16 model
model = Model(inputs=base_model.input, outputs=base_model.get_layer('fc1').output) # We use the output from the 'fc1' layer
def extract_feature(model, img):
# This function takes a RGB image (np.array with the size (H, W, 3)) as an input, then return a 4096D feature vector.
# Note that this can be accelerated by batch-processing.
x = imresize(img, (224, 224)) # Resize to 224x224 since the VGG takes this size as an input
x = numpy.float32(x) # Convert from uint8 to float32
x = numpy.expand_dims(x, axis=0) # Convert the shape from (224, 224) to (1, 224, 224)
x = preprocess_input(x) # Subtract the average value of ImagNet.
feature = model.predict(x)[0] # Extract a feature, then reshape from (1, 4096) to (4096, )
feature /= numpy.linalg.norm(feature) # Normalize the feature.
return feature
Explanation: 2. Extract a deep feature (VGG16 fc6 activation) from each image using Keras
Next, let us extract a 4096-dimensional deep feature from each image. For the feature extractor, we employ an activation from the 6th fully connected layer (in the Keras implementation, it is called fc1) of the ImageNet pre-trained VGG16 model. See the Keras tutorial for more details.
End of explanation
features_train = numpy.array([extract_feature(model, img) for img in tqdm.tqdm(img_train)])
features_test = numpy.array([extract_feature(model, img) for img in tqdm.tqdm(img_test)])
print("features_train.shape:\n{}".format(features_train.shape))
print("features_test.shape:\n{}".format(features_test.shape))
Explanation: For the first time, this also takes several minutes to download the ImageNet pre-trained weights.
Let us extract features from images as follows. This takes several minutes using a usual GPU such as GTX1080.
End of explanation
# Train an encoder
encoder = pqkmeans.encoder.PQEncoder(num_subdim=4, Ks=256)
encoder.fit(features_train)
# Encode the deep features to PQ-codes
pqcodes_test = encoder.transform(features_test)
print("pqcodes_test.shape:\n{}".format(pqcodes_test.shape))
# Run clustering
K = 10
print("Runtime of clustering:")
%time clustered = pqkmeans.clustering.PQKMeans(encoder=encoder, k=K).fit_predict(pqcodes_test)
Explanation: Now we have a set of 4096D features for both the train-dataset and the test-dataset. Note that features_train[0] is an image descriptor for img_train[0]
3. Run clustering on deep features
Let us train a PQ-encoder using the training dataset, and compress the deep features into PQ-codes
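Before visualizing, it can be handy to check how many images ended up in each cluster; a small sketch using numpy (already imported above):
cluster_ids, cluster_sizes = numpy.unique(clustered, return_counts=True)
print("Cluster sizes: {}".format(dict(zip(cluster_ids.tolist(), cluster_sizes.tolist()))))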
End of explanation
for k in range(K):
print("Cluster id: k={}".format(k))
img_ids = [img_id for img_id, cluster_id in enumerate(clustered) if cluster_id == k]
cols = 10
img_ids = img_ids[0:cols] if cols < len(img_ids) else img_ids # Let's see the top 10 results
# Visualize images assigned to this cluster
imgs = img_test[img_ids]
plt.figure(figsize=(20, 5))
for i, img in enumerate(imgs):
plt.subplot(1, cols, i + 1)
plt.imshow(img)
plt.show()
Explanation: 4. Visualize the result of image clustering
Now we can visualize image clusters. As can be seen, each cluster has similar images such as "horses", "cars", etc.
End of explanation |
8,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problème 1
Visualiser les isothermes de Freundlich et de Langmuir
Step1: Problème 2
Lisser des données simulées avec la fonction d'isotherme de Langmuir.
Créer des données simulées
Step2: Lisser les données
1- Estimer des paramètres de départ
2- Lisser avec la fonciton curve_fit, incluse dans la librairie scipy, sous la section optimize | Python Code:
%pylab inline
def freundlich(C, kp, b):
S = kp*C**b
return(S)
def langmuir(C, Smax, kp):
S = C*kp*Smax/(1+kp*C)
return(S)
conc = linspace(num = 11, start = 0, stop = 10, endpoint=True)
S_freundlich1 = freundlich(C = conc, kp = 1, b = 0.1)
S_freundlich2 = freundlich(C = conc, kp = 1, b = 0.5)
S_freundlich3 = freundlich(C = conc, kp = 1, b = 1)
plot(conc, S_freundlich1, 'r-o',
conc, S_freundlich2, 'b-o',
conc, S_freundlich3, 'g-o')
S_freundlich1 = freundlich(C = conc, kp = 1, b = 0.2)
S_freundlich2 = freundlich(C = conc, kp = 2, b = 0.2)
S_freundlich3 = freundlich(C = conc, kp = 3, b = 0.2)
plot(conc, S_freundlich1, 'r-o',
conc, S_freundlich2, 'b-o',
conc, S_freundlich3, 'g-o')
S_langmuir1 = langmuir(C = conc, kp = 0.5, Smax = 0.5)
S_langmuir2 = langmuir(C = conc, kp = 1, Smax = 0.5)
S_langmuir3 = langmuir(C = conc, kp = 2, Smax = 0.5)
S_langmuir4 = langmuir(C = conc, kp = 5, Smax = 0.5)
S_langmuir5 = langmuir(C = conc, kp = 10, Smax = 0.5)
plot(conc, S_langmuir1, 'r-o',
conc, S_langmuir2, 'b-o',
conc, S_langmuir3, 'g-o',
conc, S_langmuir4, 'y-o',
conc, S_langmuir5, 'c-o')
Explanation: Problem 1
Plot the Freundlich and Langmuir isotherms
End of explanation
data = abs(S_langmuir2 + 0.08 * np.random.randn(11))
plot(conc, data, 'ro')
conc
Explanation: Problem 2
Fit simulated data with the Langmuir isotherm function.
Create simulated data
End of explanation
S_guess = langmuir(C = conc, Smax = 0.5, kp = 1)  # langmuir() takes kp, not b
plot(conc, data, 'ro',
conc, S_guess, 'b-')
from scipy.optimize import curve_fit
fit = curve_fit(langmuir, conc, data, p0=(1, 0.5))
fit
S_fit = langmuir(C = conc, Smax = fit[0][0], kp = fit[0][1])
plot(conc, data, 'ro',
conc, S_fit, 'b-')
Explanation: Fit the data
1- Estimate starting parameters
2- Fit with the curve_fit function, included in the scipy library, under the optimize section
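One optional extra (a sketch, not in the original exercise): curve_fit also returns the covariance matrix, from which rough standard errors of the fitted parameters can be read off:
param_errors = sqrt(diag(fit[1]))  # fit[1] is the covariance matrix returned by curve_fit
print(param_errors)                # approximate standard errors for (Smax, kp)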
End of explanation |
8,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Datenmodell
Beschreibung der Domäne, die auf Basis der relationalen DB gewünscht wird
Step1: Lesen der Tarif-Informationen
Step2: Die Tarif-Informationen aufnehmen
Vorgehensweise
Step3: Header für die Tarif-Elemente
categoryID
Step4: Lesen der Fahrzeuginformationen
Step5: Aufnehmen der Fahrzeuginformationen
Die Fahrzeuge haben sehr viele Eigenschaften. Nur einige davon eignen sich auch als Kategorisierungsmerkmal
Step6: Ausgeben der Quelldateien für Neo4J-Import
Nur als Node
Step7: Lesen der Stationsinformationen
Step8: Analyse
Step9: Analyse
Step10: Analyse
Step11: Analyse
Step12: Header für die Station-Elemente
rentalZoneID
Step13: Lesen der Buchungsinformationen
Step14: Aufnehmen der Buchungsinformationen
Step15: Technical income channel
Möglicher Umgang mit den Informationen über die Quelle (Technical income channel) der Bestellung | Python Code:
import pandas as pd
import numpy as np
def writeDsvFile(df, typeName, delimiter, columnsList, headerList):
filename = './output/' + typeName + '.dsv'
df.to_csv(filename, index = False, sep = delimiter, columns = columnsList, header = headerList)
def trimName(longName):
trimmedName = ''
if longName == np.nan:
trimmedName = 'nan'
else:
#print(longName)
#print(longName.split())
#print(longName.split()[0])
#print(longName[:longName.find(" ")])
trimmedName = repr(longName).replace(' ', '_').replace('(','').replace(')','').replace('/','')
#print(trimmedName)
return trimmedName
def getLabel(row, nodeName):
switcher = {
"CATEGORY": "TARIF",
"PARENT_CATEGORY": "HAUPTTARIF",
"BOOKING": "BUCHUNG",
"VEHICLE": "FAHRZEUG",
"RENTAL_ZONE": "STATION"
}
label = switcher.get(nodeName, 'OBJ')
if nodeName == 'BOOKING':
label += ";" + str(row["PARENT_CATEGORY"])
elif nodeName == 'VEHICLE':
label += \
";" + str(row["VEHICLE_MODEL_TYPE"].upper()) + \
";" + str(row["VEHICLE_MANUFACTURER_NAME"].upper()) + \
";" + str(row["FUEL_TYPE_NAME"].upper());
elif nodeName == 'RENTAL_ZONE':
label += ";" + str(row["TYPE"].upper())
label += ";ACTIVE" if str(row["ACTIVE_X"].upper()) == "JA" else ";INACTIVE"
return label
def getRelationshipType(row, types):
switcher = {
"CATEGORY": {
"PARENT_CATEGORY": "GEHOERT_ZU",
},
"BOOKING": {
"VEHICLE": "BEZIEHT_SICH",
},
}
relType = switcher.get(types[0], {}).get(types[1], 'UNDEFINED')
return relType
Explanation: Data model
Description of the domain that is wanted on top of the relational DB:
Bookings can be made through different channels (including at stations).
Every station has an operator.
In a booking, a vehicle is rented.
The vehicle is picked up at one station and returned at a station.
Every booking is assigned to a tariff class.
Vehicles have properties.
Stations have properties.
Tariff classes have properties.
Bookings have properties.
Decisions
Vehicle-[is picked up at]->Station
Vehicle-[is returned at]->Station
Booking-[refers to]->Vehicle
Company-[operates]->Station
Company-[belongs to]->Group
Booking-[has]->Tariff class
Tariff class-[is subordinate to]->Main tariff
Vehicles differ by
Brand
Fuel
Bookings differ by
Tariff class
Distance travelled
Booking source
Tariff classes differ by
Main tariff
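As a quick, self-contained illustration of how the helper functions above encode these decisions (the row argument is an empty placeholder here, since getRelationshipType ignores it):
# the tariff / main-tariff decision maps to the GEHOERT_ZU relationship type
print(getRelationshipType(row={}, types=["CATEGORY", "PARENT_CATEGORY"]))  # -> GEHOERT_ZU
# unknown combinations fall back to UNDEFINED
print(getRelationshipType(row={}, types=["VEHICLE", "RENTAL_ZONE"]))       # -> UNDEFINED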
End of explanation
df_cat = pd.read_csv('./datasets/OPENDATA_CATEGORY_CARSHARING.csv', quotechar='"',encoding ='utf-8', sep=';')
df_cat.head()
Explanation: Reading the tariff information
End of explanation
df_cat['LABEL'] = df_cat.apply(getLabel, axis=1, nodeName='CATEGORY')
df_cat.head()
df_cat['PARENT_CATEGORY'] = df_cat.agg({'CATEGORY' : lambda x: x.split()[0].replace('/', '')})
df_cat.head()
df_cat['PARENT_LABEL'] = df_cat.apply(getLabel, axis=1, nodeName='PARENT_CATEGORY')
df_cat.head()
df_cat['PARENT_ID'] = df_cat.agg({'HAL_ID' : lambda x: x + 10000})
df_cat.head()
df_cat['REL_TYPE_CAT_PCAT'] = df_cat.apply(getRelationshipType, axis=1, types=["CATEGORY","PARENT_CATEGORY"])
df_cat.head()
Explanation: Ingesting the tariff information
Approach:
- We also want a label on the bookings, to have a simpler handle for the analysis.
- For that reason it makes sense to define main tariffs in addition to the tariffs; the main tariff is then carried as a label on a booking.
- Therefore the following steps are necessary
- The first step is reducing the tariff name to a single word (used as the name of the main tariff)
- The second step is adding it to the booking as an additional column
- Tariffs and main tariffs have a fixed label
- They share the same number space as primary key
Processing the tariff information
Create the label info
Create the name for the main tariff
Create the label for the main tariff
Create the IDs for the main tariffs
Create the relationship type between tariff and main tariff
End of explanation
# The nodes for the tariffs
writeDsvFile(df_cat, 'categories', '|', ['HAL_ID', 'CATEGORY', 'LABEL'], ['categoryID:ID(CATEGORY-ID)','name', ':LABEL'])
# The nodes for the main tariffs
writeDsvFile(df_cat, 'parent_categories', '|', ['PARENT_ID', 'PARENT_CATEGORY', 'PARENT_LABEL'], ['categoryID:ID(CATEGORY-ID)','name', ':LABEL'])
# The relationship to the main tariffs
writeDsvFile(df_cat, 'rel_cat_pcat', '|', ['HAL_ID', 'PARENT_ID', 'REL_TYPE_CAT_PCAT'], [':START_ID(CATEGORY-ID)',':END_ID(CATEGORY-ID)', ':TYPE'])
Explanation: Header for the tariff elements
categoryID:ID(CATEGORY-ID)|name|:LABEL
categoryID:ID(CATEGORY-ID)|name|:LABEL
Writing out the source files for the Neo4j import
Tariffs
Main tariffs
Relationship
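As a quick sanity check of the exported files (a sketch; it simply assumes the ./output directory used by writeDsvFile above), the DSV can be read back with pandas:
check_df = pd.read_csv('./output/categories.dsv', sep='|')
print(check_df.columns.tolist())  # expect ['categoryID:ID(CATEGORY-ID)', 'name', ':LABEL']
print(len(check_df))              # one row per tariff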
End of explanation
df_fz = pd.read_csv('./datasets/OPENDATA_VEHICLE_CARSHARING.csv', quotechar='"',encoding ='utf-8', sep=';')
df_fz.head()
Explanation: Reading the vehicle information
End of explanation
df_fz['LABEL'] = df_fz.apply(getLabel, axis=1, nodeName='VEHICLE')
df_fz.head()
Explanation: Ingesting the vehicle information
The vehicles have a large number of properties. Only some of them are also suitable as categorisation criteria:
- VEHICLE_MODEL_TYPE -> type
- VEHICLE_MANUFACTURER_NAME -> brand
- FUEL_TYPE_NAME -> fuel
Other properties are taken over as attributes.
Header for the vehicle information:
vehicleID:ID(VEHICLE-ID)|modelName|modelDetails|vin|registrationPlate|kw:long|fuelType|ownershipType|company|companyGroup|:LABEL
End of explanation
# The nodes for the vehicles
writeDsvFile(df_fz, 'vehicles', '|', ['VEHICLE_HAL_ID', 'VEHICLE_MODEL_NAME', 'VEHICLE_TYPE_NAME', 'VIN', 'REGISTRATION_PLATE', 'KW', 'FUEL_TYPE_NAME', 'OWNERSHIP_TYPE', 'COMPANY', 'COMPANY_GROUP', 'LABEL'], ['vehicleID:ID(VEHICLE-ID)','modelName','modelDetails','vin','registrationPlate','kw:long','fuelType','ownershipType','company','companyGroup',':LABEL'])
Explanation: Writing out the source files for the Neo4j import
Only as a node (no relationship files)
End of explanation
df_stations = pd.read_csv('./datasets/OPENDATA_RENTAL_ZONE_CARSHARING.csv', quotechar='"',encoding ='utf-8', sep=';')
df_stations.head()
Explanation: Reading the station information
End of explanation
df_stations['RENTAL_ZONE_HAL_SRC'].unique()
Explanation: Analysis: RENTAL_ZONE_HAL_SRC
Does the column have any relevance?
Since only a single value occurs in the column: no
End of explanation
df_stations['COUNTRY'].unique()
Explanation: Analysis: COUNTRY
Does the column have any relevance?
Since only a single value occurs in the column: no
For that reason only the city is used
End of explanation
import functools as ft
ft.reduce( \
lambda x, y: x & y , \
map( \
lambda x: len(x.split())==1 \
, df_stations['CITY'].unique()) \
)
list(filter( \
lambda x: len(x.split())>1 \
, df_stations['CITY'].unique() \
))
Explanation: Analysis: CITY
How should the column be used?
The column also contains cities with spaces, hyphens, parentheses and slashes
For now the information is kept as an attribute.
End of explanation
df_stations['TYPE'].unique()
df_stations['LABEL'] = df_stations.apply(getLabel, axis=1, nodeName='RENTAL_ZONE')
df_stations.head()
Explanation: Analysis: TYPE
Does the column have any relevance?
Yes, the column is suitable as a label
End of explanation
# The nodes for the stations (rental zones)
writeDsvFile(df_stations, 'rentalZones', '|', \
['RENTAL_ZONE_HAL_ID', 'NAME', 'CODE', 'TYPE', 'CITY', 'LATITUDE', \
'LONGTITUDE', 'POI_AIRPORT_X', 'POI_LONG_DISTANCE_TRAINS_X', 'POI_SUBURBAN_TRAINS_X', \
'POI_UNDERGROUND_X', 'LABEL']\
, [ 'rentalZoneID:ID(RENTAL-ZONE-ID)', 'name', 'code', 'type', 'city', 'latitude:float', \
'longtitude:float', 'poiAirport', 'poiLongDistanceTrains', 'poiSuburbanTrains', \
'poiUnderground', ':LABEL'])
Explanation: Header for the station elements
rentalZoneID:ID(RENTAL-ZONE-ID)|name|code|type|city|latitude:float|longtitude:float|poiAirport|poiLongDistanceTrains|poiSuburbanTrains|poiUnderground|:LABEL
Writing out the source files for the Neo4j import
Stations
End of explanation
#df_cat.set_index("HAL_ID", inplace= True)
#df_cat.head()
df_booking = pd.read_csv('./datasets/OPENDATA_BOOKING_CARSHARING.csv', quotechar='"',encoding ='utf-8', sep=';')
df_booking.head()
Explanation: Reading the booking information
End of explanation
# List unique values in the TECHNICAL_INCOME_CHANNEL column
df_booking.TECHNICAL_INCOME_CHANNEL.unique()
df_booking['INCOME_CHANNEL_NAME'] = df_booking.agg({'TECHNICAL_INCOME_CHANNEL' : lambda x: trimName(x)})
df_booking.INCOME_CHANNEL_NAME.head()
df_booking.set_index("BOOKING_HAL_ID", inplace= True)
df_booking.head()
Explanation: Ingesting the booking information
End of explanation
df_cat_labels = df_cat.drop(['CATEGORY', 'COMPANY', 'COMPANY_GROUP', 'PARENT_LABEL', 'LABEL', 'PARENT_ID', 'REL_TYPE_CAT_PCAT'], axis=1)
df_booking_cat = pd.merge(df_booking, df_cat_labels, left_on='CATEGORY_HAL_ID', right_on='HAL_ID') \
.drop(['CUSTOMER_HAL_ID','HAL_ID','RENTAL_ZONE_HAL_SRC'], axis=1)
df_booking_cat.head()
Explanation: Technical income channel
Possible ways of handling the information about the source (technical income channel) of the order:
As a node with further information/relationships:
- Detailed information about the source is available and the information is relevant, from a domain point of view, for how the data is kept. Required effort:
- Integrate the detailed information
- ID namespace for the nodes
- Clarify the necessary relationships
As a label on a booking:
- No detailed information about the source is available. Required effort:
- Make the source names usable as labels (reduce them to a single word)
- Add the label names as an additional column in df_booking.
Referenced information in a booking:
CATEGORY_HAL_ID -> tariff category
VEHICLE_HAL_ID -> vehicle
CUSTOMER_HAL_ID -> customer (records not available)
START_RENTAL_ZONE_HAL_ID -> pick-up station
END_RENTAL_ZONE_HAL_ID -> return station
End of explanation |
8,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 预创建的 Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: 数据集
本文档中的示例程序构建并测试了一个模型,该模型根据花萼和花瓣的大小将鸢尾花分成三种物种。
您将使用鸢尾花数据集训练模型。该数据集包括四个特征和一个标签。这四个特征确定了单个鸢尾花的以下植物学特征:
花萼长度
花萼宽度
花瓣长度
花瓣宽度
根据这些信息,您可以定义一些有用的常量来解析数据:
Step3: 接下来,使用 Keras 与 Pandas 下载并解析鸢尾花数据集。注意为训练和测试保留不同的数据集。
Step4: 通过检查数据您可以发现有四列浮点型特征和一列 int32 型标签。
Step5: 对于每个数据集都分割出标签,模型将被训练来预测这些标签。
Step6: Estimator 编程概述
现在您已经设定好了数据,您可以使用 Tensorflow Estimator 定义模型。Estimator 是从 tf.estimator.Estimator 中派生的任何类。Tensorflow提供了一组tf.estimator(例如,LinearRegressor)来实现常见的机器学习算法。此外,您可以编写您自己的自定义 Estimator。入门阶段我们建议使用预创建的 Estimator。
为了编写基于预创建的 Estimator 的 Tensorflow 项目,您必须完成以下工作:
创建一个或多个输入函数
定义模型的特征列
实例化一个 Estimator,指定特征列和各种超参数。
在 Estimator 对象上调用一个或多个方法,传递合适的输入函数以作为数据源。
我们来看看这些任务是如何在鸢尾花分类中实现的。
创建输入函数
您必须创建输入函数来提供用于训练、评估和预测的数据。
输入函数是一个返回 tf.data.Dataset 对象的函数,此对象会输出下列含两个元素的元组:
features——Python字典,其中:
每个键都是特征名称
每个值都是包含此特征所有值的数组
label 包含每个样本的标签的值的数组。
为了向您展示输入函数的格式,请查看下面这个简单的实现:
Step8: 您的输入函数可以以您喜欢的方式生成 features 字典与 label 列表。但是,我们建议使用 Tensorflow 的 Dataset API,该 API 可以用来解析各种类型的数据。
Dataset API 可以为您处理很多常见情况。例如,使用 Dataset API,您可以轻松地从大量文件中并行读取记录,并将它们合并为单个数据流。
为了简化此示例,我们将使用 pandas 加载数据,并利用此内存数据构建输入管道。
Step9: 定义特征列(feature columns)
特征列(feature columns)是一个对象,用于描述模型应该如何使用特征字典中的原始输入数据。当您构建一个 Estimator 模型的时候,您会向其传递一个特征列的列表,其中包含您希望模型使用的每个特征。tf.feature_column 模块提供了许多为模型表示数据的选项。
对于鸢尾花问题,4 个原始特征是数值,因此我们将构建一个特征列的列表,以告知 Estimator 模型将 4 个特征都表示为 32 位浮点值。故创建特征列的代码如下所示:
Step10: 特征列可能比上述示例复杂得多。您可以从指南获取更多关于特征列的信息。
我们已经介绍了如何使模型表示原始特征,现在您可以构建 Estimator 了。
实例化 Estimator
鸢尾花为题是一个经典的分类问题。幸运的是,Tensorflow 提供了几个预创建的 Estimator 分类器,其中包括:
tf.estimator.DNNClassifier 用于多类别分类的深度模型
tf.estimator.DNNLinearCombinedClassifier 用于广度与深度模型
tf.estimator.LinearClassifier 用于基于线性模型的分类器
对于鸢尾花问题,tf.estimator.DNNClassifier 似乎是最好的选择。您可以这样实例化该 Estimator:
Step11: ## 训练、评估和预测
我们已经有一个 Estimator 对象,现在可以调用方法来执行下列操作:
训练模型。
评估经过训练的模型。
使用经过训练的模型进行预测。
训练模型
通过调用 Estimator 的 Train 方法来训练模型,如下所示:
Step12: 注意将 input_fn 调用封装在 lambda 中以获取参数,同时提供不带参数的输入函数,如 Estimator 所预期的那样。step 参数告知该方法在训练多少步后停止训练。
评估经过训练的模型
现在模型已经经过训练,您可以获取一些关于模型性能的统计信息。代码块将在测试数据上对经过训练的模型的准确率(accuracy)进行评估:
Step14: 与对 train 方法的调用不同,我们没有传递 steps 参数来进行评估。用于评估的 input_fn 只生成一个 epoch 的数据。
eval_result 字典亦包含 average_loss(每个样本的平均误差),loss(每个 mini-batch 的平均误差)与 Estimator 的 global_step(经历的训练迭代次数)值。
利用经过训练的模型进行预测(推理)
我们已经有一个经过训练的模型,可以生成准确的评估结果。我们现在可以使用经过训练的模型,根据一些无标签测量结果预测鸢尾花的品种。与训练和评估一样,我们使用单个函数调用进行预测:
Step15: predict 方法返回一个 Python 可迭代对象,为每个样本生成一个预测结果字典。以下代码输出了一些预测及其概率: | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import pandas as pd
Explanation: 预创建的 Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tutorials/estimator/premade"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />在 tensorFlow.google.cn 上查看</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/estimator/premade.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />在 Google Colab 中运行</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/estimator/premade.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />在 GitHub 上查看源代码</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/estimator/premade.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />下载 notebook</a>
</td>
</table>
Note: 我们的 TensorFlow 社区翻译了这些文档。因为社区翻译是尽力而为, 所以无法保证它们是最准确的,并且反映了最新的
官方英文文档。如果您有改进此翻译的建议, 请提交 pull request 到
tensorflow/docs GitHub 仓库。要志愿地撰写或者审核译文,请加入
docs-zh-cn@tensorflow.org Google Group。
本教程将向您展示如何使用 Estimators 解决 Tensorflow 中的鸢尾花(Iris)分类问题。Estimator 是 Tensorflow 完整模型的高级表示,它被设计用于轻松扩展和异步训练。更多细节请参阅 Estimators。
请注意,在 Tensorflow 2.0 中,Keras API 可以完成许多相同的任务,而且被认为是一个更易学习的API。如果您刚刚开始入门,我们建议您从 Keras 开始。有关 Tensorflow 2.0 中可用高级API的更多信息,请参阅 Keras标准化。
首先要做的事
为了开始,您将首先导入 Tensorflow 和一系列您需要的库。
End of explanation
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
Explanation: 数据集
本文档中的示例程序构建并测试了一个模型,该模型根据花萼和花瓣的大小将鸢尾花分成三种物种。
您将使用鸢尾花数据集训练模型。该数据集包括四个特征和一个标签。这四个特征确定了单个鸢尾花的以下植物学特征:
花萼长度
花萼宽度
花瓣长度
花瓣宽度
根据这些信息,您可以定义一些有用的常量来解析数据:
End of explanation
train_path = tf.keras.utils.get_file(
"iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
"iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
Explanation: 接下来,使用 Keras 与 Pandas 下载并解析鸢尾花数据集。注意为训练和测试保留不同的数据集。
End of explanation
train.head()
Explanation: 通过检查数据您可以发现有四列浮点型特征和一列 int32 型标签。
End of explanation
train_y = train.pop('Species')
test_y = test.pop('Species')
# 标签列现已从数据中删除
train.head()
Explanation: 对于每个数据集都分割出标签,模型将被训练来预测这些标签。
End of explanation
def input_evaluation_set():
features = {'SepalLength': np.array([6.4, 5.0]),
'SepalWidth': np.array([2.8, 2.3]),
'PetalLength': np.array([5.6, 3.3]),
'PetalWidth': np.array([2.2, 1.0])}
labels = np.array([2, 1])
return features, labels
Explanation: Estimator 编程概述
现在您已经设定好了数据,您可以使用 Tensorflow Estimator 定义模型。Estimator 是从 tf.estimator.Estimator 中派生的任何类。Tensorflow提供了一组tf.estimator(例如,LinearRegressor)来实现常见的机器学习算法。此外,您可以编写您自己的自定义 Estimator。入门阶段我们建议使用预创建的 Estimator。
为了编写基于预创建的 Estimator 的 Tensorflow 项目,您必须完成以下工作:
创建一个或多个输入函数
定义模型的特征列
实例化一个 Estimator,指定特征列和各种超参数。
在 Estimator 对象上调用一个或多个方法,传递合适的输入函数以作为数据源。
我们来看看这些任务是如何在鸢尾花分类中实现的。
创建输入函数
您必须创建输入函数来提供用于训练、评估和预测的数据。
输入函数是一个返回 tf.data.Dataset 对象的函数,此对象会输出下列含两个元素的元组:
features——Python字典,其中:
每个键都是特征名称
每个值都是包含此特征所有值的数组
label 包含每个样本的标签的值的数组。
为了向您展示输入函数的格式,请查看下面这个简单的实现:
End of explanation
def input_fn(features, labels, training=True, batch_size=256):
    """An input function for training or evaluating."""
# 将输入转换为数据集。
dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
# 如果在训练模式下混淆并重复数据。
if training:
dataset = dataset.shuffle(1000).repeat()
return dataset.batch(batch_size)
Explanation: 您的输入函数可以以您喜欢的方式生成 features 字典与 label 列表。但是,我们建议使用 Tensorflow 的 Dataset API,该 API 可以用来解析各种类型的数据。
Dataset API 可以为您处理很多常见情况。例如,使用 Dataset API,您可以轻松地从大量文件中并行读取记录,并将它们合并为单个数据流。
为了简化此示例,我们将使用 pandas 加载数据,并利用此内存数据构建输入管道。
End of explanation
# 特征列描述了如何使用输入。
my_feature_columns = []
for key in train.keys():
my_feature_columns.append(tf.feature_column.numeric_column(key=key))
Explanation: 定义特征列(feature columns)
特征列(feature columns)是一个对象,用于描述模型应该如何使用特征字典中的原始输入数据。当您构建一个 Estimator 模型的时候,您会向其传递一个特征列的列表,其中包含您希望模型使用的每个特征。tf.feature_column 模块提供了许多为模型表示数据的选项。
对于鸢尾花问题,4 个原始特征是数值,因此我们将构建一个特征列的列表,以告知 Estimator 模型将 4 个特征都表示为 32 位浮点值。故创建特征列的代码如下所示:
End of explanation
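The feature_column module is not limited to numeric columns. As a purely illustrative sketch (these columns are not used by the iris model below; the column choices and boundaries are made up for demonstration), categorical and bucketized columns can be declared like this:
# Hypothetical examples of other feature column types (not used below).
species_vocab_col = tf.feature_column.categorical_column_with_vocabulary_list(
    'Species', ['Setosa', 'Versicolor', 'Virginica'])
sepal_length_buckets = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('SepalLength'), boundaries=[5.0, 6.0, 7.0])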
# 构建一个拥有两个隐层,隐藏节点分别为 30 和 10 的深度神经网络。
classifier = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
# 隐层所含结点数量分别为 30 和 10.
hidden_units=[30, 10],
# 模型必须从三个类别中做出选择。
n_classes=3)
Explanation: 特征列可能比上述示例复杂得多。您可以从指南获取更多关于特征列的信息。
我们已经介绍了如何使模型表示原始特征,现在您可以构建 Estimator 了。
实例化 Estimator
鸢尾花为题是一个经典的分类问题。幸运的是,Tensorflow 提供了几个预创建的 Estimator 分类器,其中包括:
tf.estimator.DNNClassifier 用于多类别分类的深度模型
tf.estimator.DNNLinearCombinedClassifier 用于广度与深度模型
tf.estimator.LinearClassifier 用于基于线性模型的分类器
对于鸢尾花问题,tf.estimator.DNNClassifier 似乎是最好的选择。您可以这样实例化该 Estimator:
End of explanation
# 训练模型。
classifier.train(
input_fn=lambda: input_fn(train, train_y, training=True),
steps=5000)
Explanation: ## 训练、评估和预测
我们已经有一个 Estimator 对象,现在可以调用方法来执行下列操作:
训练模型。
评估经过训练的模型。
使用经过训练的模型进行预测。
训练模型
通过调用 Estimator 的 Train 方法来训练模型,如下所示:
End of explanation
eval_result = classifier.evaluate(
input_fn=lambda: input_fn(test, test_y, training=False))
print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
Explanation: 注意将 input_fn 调用封装在 lambda 中以获取参数,同时提供不带参数的输入函数,如 Estimator 所预期的那样。step 参数告知该方法在训练多少步后停止训练。
评估经过训练的模型
现在模型已经经过训练,您可以获取一些关于模型性能的统计信息。代码块将在测试数据上对经过训练的模型的准确率(accuracy)进行评估:
End of explanation
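The lambda above only exists to turn input_fn into a zero-argument callable. As an equivalent alternative (shown for illustration only; running the commented line would simply continue training for another 5000 steps), functools.partial can be used instead:
from functools import partial
train_input_fn = partial(input_fn, train, train_y, training=True)  # same effect as the lambda
# classifier.train(input_fn=train_input_fn, steps=5000)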
# 由模型生成预测
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
'SepalLength': [5.1, 5.9, 6.9],
'SepalWidth': [3.3, 3.0, 3.1],
'PetalLength': [1.7, 4.2, 5.4],
'PetalWidth': [0.5, 1.5, 2.1],
}
def input_fn(features, batch_size=256):
    """An input function for prediction."""
# 将输入转换为无标签数据集。
return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)
predictions = classifier.predict(
input_fn=lambda: input_fn(predict_x))
Explanation: 与对 train 方法的调用不同,我们没有传递 steps 参数来进行评估。用于评估的 input_fn 只生成一个 epoch 的数据。
eval_result 字典亦包含 average_loss(每个样本的平均误差),loss(每个 mini-batch 的平均误差)与 Estimator 的 global_step(经历的训练迭代次数)值。
利用经过训练的模型进行预测(推理)
我们已经有一个经过训练的模型,可以生成准确的评估结果。我们现在可以使用经过训练的模型,根据一些无标签测量结果预测鸢尾花的品种。与训练和评估一样,我们使用单个函数调用进行预测:
End of explanation
for pred_dict, expec in zip(predictions, expected):
class_id = pred_dict['class_ids'][0]
probability = pred_dict['probabilities'][class_id]
print('Prediction is "{}" ({:.1f}%), expected "{}"'.format(
SPECIES[class_id], 100 * probability, expec))
Explanation: predict 方法返回一个 Python 可迭代对象,为每个样本生成一个预测结果字典。以下代码输出了一些预测及其概率:
End of explanation |
8,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vector Math
In this notebook we'll demo that word2vec-like properties are kept. You can download the vectors, follow along at home, and make your own queries if you'd like.
Sums
Step1: You don't need to run the code below unless you've trained your own model. Otherwise, just download the word vectors from the URL above.
Step2: L2 Normalize the word vectors
Step3: Queen is several rankings down, so not exactly the same as out of the box word2vec! | Python Code:
!wget https://zenodo.org/record/49903/files/vocab.npy
!wget https://zenodo.org/record/49903/files/word_vectors.npy
Explanation: Vector Math
In this notebook we'll demo that word2vec-like properties are kept. You can download the vectors, follow along at home, and make your own queries if you'd like.
Sums:
silicon valley ~ california + technology
uber ~ taxis + company
baidu ~ china + search engine
Analogies:
Mark Zuckerberg - Facebook + Amazon = Jeff Bezos
Hacker News - story + article = StackOverflow
VIM - terminal + graphics = Photoshop
And slightly more whimsically:
vegetables - eat + drink = tea
scala - features + simple = haskell
End of explanation
#from lda2vec_model import LDA2Vec
#from chainer import serializers
#import numpy as np
#import pandas as pd
#import pickle
#
#features = pd.read_pickle("../data/features.pd")
#vocab = np.load("../data/vocab")
#npz = np.load(open('topics.story.pyldavis.npz', 'r'))
#dat = {k: v for (k, v) in npz.iteritems()}
#vocab = dat['vocab'].tolist()
#dat = np.load("../data/data.npz")
#n_stories = features.story_id_codes.max() + 1
#n_units = 256
#n_vocab = dat['flattened'].max() + 1
#model = LDA2Vec(n_stories=n_stories, n_story_topics=40,
# n_authors=5664, n_author_topics=20,
# n_units=n_units, n_vocab=n_vocab, counts=np.zeros(n_vocab),
# n_samples=15)
#serializers.load_hdf5("/home/chris/lda2vec-12/examples/hacker_news/lda2vec/lda2vec.hdf5", model)
#np.save("word_vectors", model.sampler.W.data)
#np.save("vocab", vocab)
import numpy as np
word_vectors_raw = np.load("word_vectors.npy")
vocab = np.load("vocab.npy").tolist()
Explanation: You don't need to run the code below unless you've trained your own model. Otherwise, just download the word vectors from the URL above.
End of explanation
word_vectors = word_vectors_raw / np.linalg.norm(word_vectors_raw, axis=-1)[:, None]
def get_vector(token):
index = vocab.index(token)
return word_vectors[index, :].copy()
def most_similar(token, n=20):
word_vector = get_vector(token)
similarities = np.dot(word_vectors, word_vector)
top = np.argsort(similarities)[::-1][:n]
return [vocab[i] for i in top]
# This is Levy & Goldberg's 3Cosmul Metric
# Based on the Gensim implementation: https://github.com/piskvorky/gensim/blob/master/gensim/models/word2vec.py
def cosmul(positives, negatives, topn=20):
positive = [get_vector(p) for p in positives]
negative = [get_vector(n) for n in negatives]
pos_dists = [((1 + np.dot(word_vectors, term)) / 2.) for term in positive]
neg_dists = [((1 + np.dot(word_vectors, term)) / 2.) for term in negative]
dists = np.prod(pos_dists, axis=0) / (np.prod(neg_dists, axis=0) + 1e-6)
idxs = np.argsort(dists)[::-1][:topn]
return [vocab[i] for i in idxs if (vocab[i] not in positives) and (vocab[i] not in negatives)]
def most_similar_posneg(positives, negatives, topn=20):
positive = np.sum([get_vector(p) for p in positives], axis=0)
negative = np.sum([get_vector(n) for n in negatives], axis=0)
vector = positive - negative
dists = np.dot(word_vectors, vector)
idxs = np.argsort(dists)[::-1][:topn]
return [vocab[i] for i in idxs if (vocab[i] not in positives) and (vocab[i] not in negatives)]
most_similar('san francisco')
cosmul(['california', 'technology'], [], topn=20)
cosmul(['digital', 'currency'], [], topn=20)
cosmul(['text editor', 'terminal'], [], topn=20)
cosmul(['china'], [], topn=20)
cosmul(['china', 'search engine'], [], topn=20)
cosmul(['microsoft'], [], topn=20)
cosmul(['microsoft', 'cloud'], [], topn=20)
Explanation: L2 Normalize the word vectors
End of explanation
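A quick sanity check (a small sketch added here) that the normalization above really produced unit-length rows:
# every row of word_vectors should now have (approximately) unit L2 norm
print(np.allclose(np.linalg.norm(word_vectors, axis=-1), 1.0))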
cosmul(['king', 'woman'], ['man'], topn=20)
print 'Most similar'
print '\n'.join(most_similar('mark zuckerberg'))
print '\nCosmul'
pos = ['mark zuckerberg', 'amazon']
neg = ['facebook']
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['hacker news', 'question']
neg = ['story']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['san francisco']
neg = []
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['nlp', 'image']
neg = ['text']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['vim', 'graphics']
neg = ['terminal']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['vegetables', 'drink']
neg = ['eat']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['lda', '']
neg = ['']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
Explanation: Queen is several rankings down, so not exactly the same as out of the box word2vec!
End of explanation |
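For comparison, the additive (3CosAdd-style) ranking implemented above as most_similar_posneg can be run on the same analogy; whether it places queen higher than the 3CosMul ranking did is left as an empirical check:
print('\n'.join(most_similar_posneg(['king', 'woman'], ['man'], topn=20)))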
8,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 3)
Determining Important Nodes
There are a number of ways to measure the importance of nodes in a network. Possibly the easiest is the degree, i.e. the number of neighbors. In a social network, for example, a person who knows many others could be an important person. However, is this notion really meaningful? Probably not, since it does not consider the importance of the neighbors. Also, there is an interesting effect in social networks with respect to neighborhood sizes. Let us investigate this effect a little bit
Step2: Thus, 86% of the users in this network have fewer friends than their friends have on average. While this result cannot be generalized exactly like this to other networks, the qualitative effect is often seen in social (and other) networks. Thus, let us now consider measures that broaden the rather narrow scope of the degree.
$k$-core Decomposition
Thus, the next concept we consider is the $k$-core decomposition. To answer the following Q&A session, go back to the lecture slides.
Q&A Session #5
What is the definition of an $i$-core? (Note that $k$ is often used for the largest core only!)
Answer
Step3: Centrality Measures
The $k$-core decomposition is rather, as the name suggests, a decomposition of the vertices into discrete subsets. Nodes with the same coreness (i.e. in the same shell) have equal importance. Rankings where many vertices are equally important are often not very meaningful. That is why the $k$-core decomposition should not be interpreted as a fine-grained ranking mechanism.
Q&A Session #6
Take the Facebook graph MIT8 and find the most central nodes. Take the relevance of their neighbors into account. Consider that MIT8 models a social network, not a web graph. Which algorithm would you choose? (Hint | Python Code:
%matplotlib inline
from networkit import *
import matplotlib.pyplot as plt
cd ~/Documents/workspace/NetworKit
%matplotlib inline
G = readGraph("input/MIT8.edgelist", Format.EdgeListTabZero)
def avgFriendDegree(v):
    """Calculate the average degree of the neighbors of a node."""
degSum = 0
for u in G.neighbors(v):
degSum += G.degree(u)
return degSum / G.degree(v)
# Code for 3-3) and 3-4)
maxDeg = sorted(centrality.DegreeCentrality(G).run().scores(), reverse=True)[1]
plt.figure(figsize=(10,6))
plt.xlim(0, maxDeg)
plt.ylim(0, maxDeg)
plt.xlabel("number of friends")
plt.ylabel("average number of friends of friends")
plt.scatter(x=[G.degree(v) for v in G.nodes()], y=[avgFriendDegree(v) for v in G.nodes()], s=0.2,)
count = 0 # count the number of persons whose friends have on average more friends
for v in G.nodes():
if G.degree(v) < avgFriendDegree(v):
count += 1
print("4-4): ", count / G.numberOfNodes())
Explanation: Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 3)
Determining Important Nodes
There are a number of ways to measure the importance of nodes in a network. Possibly the easiest is the degree, i.e. the number of neighbors. In a social network, for example, a person who knows many others could be an important person. However, is this notion really meaningful? Probably not, since it does not consider the importance of the neighbors. Also, there is an interesting effect in social networks with respect to neighborhood sizes. Let us investigate this effect a little bit:
Q&A Session #4
Do you think your number of online friends is above/below/on average? (You do not have to answer this question openly.)
Answer (may be secret):
What do you expect: How many people (in percent) in a social network have fewer friends than their friends on average?
Answer (choose one):
a) 0 - 25%
b) 26 - 50%
c) 51 - 75%
d) 76 - 100%
Use the Facebook graph. Compute for each vertex the average degree of its neighbors.
Answer:
Count the number of persons whose friends have on average more friends. What is their percentage in this network?
Answer:
End of explanation
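As a quick numeric illustration of the friendship paradox discussed above (a sketch using the graph G and the helper avgFriendDegree defined earlier), the mean degree can be compared with the mean neighbour degree:
avgDeg = sum(G.degree(v) for v in G.nodes()) / G.numberOfNodes()
avgNeighborDeg = sum(avgFriendDegree(v) for v in G.nodes()) / G.numberOfNodes()
print("average number of friends:", avgDeg)
print("average number of friends of friends:", avgNeighborDeg)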
# Code for 5-3)
mit8 = readGraph("input/MIT8.edgelist", Format.EdgeListTabZero)
airf1 = readGraph("input/airfoil1.graph", Format.METIS)
gen = generators.ErdosRenyiGenerator(1000, 0.01)
er1000 = gen.generate()
for g in {mit8, airf1, er1000}:
print(g.toString())
coreDec = centrality.CoreDecomposition(g)
coreDec.run()
print(set(coreDec.scores()))
Explanation: Thus, 86% of the users in this network have fewer friends than their friends have on average. While this result cannot be generalized exactly like this to other networks, the qualitative effect is often seen in social (and other) networks. Thus, let us now consider measures that broaden the rather narrow scope of the degree.
$k$-core Decomposition
Thus, the next concept we consider is the $k$-core decomposition. To answer the following Q&A session, go back to the lecture slides.
Q&A Session #5
What is the definition of an $i$-core? (Note that $k$ is often used for the largest core only!)
Answer:
Why do you think it can be considered a more robust measure for importance compared to the degree?
Answer:
Compute the $k$-core decomposition of the three networks used before. Then print the non-empty $i$-shells by using the method scores(). What results (similarities/differences) do you expect? Are these expectations met by the results?
Answer:
What disadvantage do you see when using $k$-core decomposition to rate nodes according to their importance?
Answer:
End of explanation
# Code for 6-1) and 6-2)
evc = centrality.EigenvectorCentrality(mit8)
evc.run()
evc.ranking()[:15]
Explanation: Centrality Measures
The $k$-core decomposition is rather, as the name suggests, a decomposition of the vertices into discrete subsets. Nodes with the same coreness (i.e. in the same shell) have equal importance. Rankings where many vertices are equally important are often not very meaningful. That is why the $k$-core decomposition should not be interpreted as a fine-grained ranking mechanism.
Q&A Session #6
Take the Facebook graph MIT8 and find the most central nodes. Take the relevance of their neighbors into account. Consider that MIT8 models a social network, not a web graph. Which algorithm would you choose? (Hint: Look at the lecture slides!)
Answer:
What are the 15 most important nodes according to the method in 1)?
Answer:
What other centrality measures do you recall?
Answer:
After you answered the questions, proceed with Tutorial #4.
End of explanation |
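As a hedged follow-up to the last question (assuming NetworKit's Betweenness and PageRank classes, which expose the same run()/ranking() interface as the EigenvectorCentrality used above), two further centrality rankings could be computed like this:
bc = centrality.Betweenness(mit8)
bc.run()
print(bc.ranking()[:15])
pr = centrality.PageRank(mit8, damp=0.85)
pr.run()
print(pr.ranking()[:15])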
8,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vector Space Model
Adapted from this blog post, written by Allen Riddell.
One of the benefits of the DTM is that it allows us to think about text within the bounds of geometry, which then allows us to think about the "distance" between texts. Today's tutorial will explore how we might use distance measures in our text analysis workflow, and toward what end.
Learning Goals
Gain an intuition about how we might think about, and measure, the distance between texts
Learn how to measure distances using scikit-learn
Learn how to visualize distances in a few ways, and how that might help us in our text analysis project
Learn more about the flexibilities and range of tools in scikit-learn
Outline
<ol start="0">
<li>[Vectorizing our text
Step1: <a id='compare'></a>
1. Comparing texts
Arranging our texts in a document-term matrix makes available a range of exploratory procedures. For example, calculating a measure of similarity between texts becomes simple. Since each row of the document-term matrix is a sequence of a novel’s word frequencies, it is possible to put mathematical notions of similarity (or distance) between sequences of numbers in service of calculating the similarity (or distance) between any two novels. One frequently used measure of distance between vectors (a measure easily converted into a measure of similarity) is Euclidean distance. The Euclidean distance between two vectors in the plane should be familiar from geometry, as it is the length of the hypotenuse that joins the two vectors. For instance, consider the Euclidean distance between the vectors \begin{align}
\overrightarrow{x}=(1,3) \space \space and\space\space\overrightarrow{y}=(4,2) \end{align}
the Euclidean distance can be calculated as follows
Step2: And if we want to use a measure of distance that takes into consideration the length of the novels (an excellent idea), we can calculate the cosine similarity by importing sklearn.metrics.pairwise.cosine_similarity and use it in place of euclidean_distances.
Cosine similarity measures the angle between two vectors
Step3: <a id='visual'></a>
2. Visualizing distances
It is often desirable to visualize the pairwise distances between our texts. A general approach to visualizing distances is to assign a point in a plane to each text, making sure that the distance between points is proportional to the pairwise distances we calculated. This kind of visualization is common enough that it has a name, “multidimensional scaling” (MDS) and family of functions in scikit-learn.
Step4: <a id='cluster'></a>
3. Clustering texts based on distance
Clustering texts into discrete groups of similar texts is often a useful exploratory step. For example, a researcher may be wondering if certain textual features partition a collection of texts by author or by genre. Pairwise distances alone do not produce any kind of classification. To put a set of distance measurements to work in classification requires additional assumptions, such as a definition of a group or cluster.
The ideas underlying the transition from distances to clusters are, for the most part, common sense. Any clustering of texts should result in texts that are closer to each other (in the distance matrix) residing in the same cluster. There are many ways of satisfying this requirement; there is no unique clustering based on distances that is the “best”. One strategy for clustering in circulation is called Ward’s method. Rather than producing a single clustering, Ward’s method produces a hierarchy of clusterings, as we will see in a moment. All that Ward’s method requires is a set of pairwise distance measurements–such as those we calculated a moment ago. Ward’s method produces a hierarchical clustering of texts via the following procedure
Step5: <a id='kmeans'></a>
4. K-Means Clustering
From the dendrogram above, we might expect these four novels to have two clusters
Step6: <a id='exercise'></a>
Exercise | Python Code:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
filenames = ['../Data/Alcott_GarlandForGirls.txt',
'../Data/Austen_PrideAndPrejudice.txt',
'../Data/Machiavelli_ThePrince.txt',
'../Data/Marx_CommunistManifesto.txt']
vectorizer = CountVectorizer(input='filename', encoding='utf-8',stop_words='english') #filname input, which bypases reading in files
dtm = vectorizer.fit_transform(filenames) # a sparse matrix
vocab = vectorizer.get_feature_names() # a list
dtm
dtm = dtm.toarray() # convert to a regular, dense array
vocab = np.array(vocab)
dtm
Explanation: Vector Space Model
Adapted from this blog post, written by Allen Riddell.
One of the benefits of the DTM is that it allows us to think about text within the bounds of geometry, which then allows us to think about the "distance" between texts. Today's tutorial will explore how we might use distance measures in our text analysis workflow, and toward what end.
Learning Goals
Gain an intuition about how we might think about, and measure, the distance between texts
Learn how to measure distances using scikit-learn
Learn how to visualize distances in a few ways, and how that might help us in our text analysis project
Learn more about the flexibilities and range of tools in scikit-learn
Outline
<ol start="0">
<li>[Vectorizing our text: The Sparse DTM to Numpy Array](#vector)</li>
<li>[Comparing Texts](#compare)</li>
<li>[Visualizing Distance](#visual)</li>
<li>[Clustering Text based on Distance Metrics (if time)](#cluster)</li>
<li>[K-Means Clustering (if time)](#kmeans)</li>
</ol>
Key Terms
Euclidean Distance
In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" (i.e. straight-line) distance between two points in Euclidean space. With this distance, Euclidean space becomes a metric space.
Cosine Similarity
Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space that measures the cosine of the angle between them. The cosine of 0° is 1, and it is less than 1 for any other angle.
Multidimensional Scaling
Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset. It refers to a set of related ordination techniques used in information visualization, in particular to display the information contained in a distance matrix.
Dendrogram
A dendrogram (from Greek dendro "tree" and gramma "drawing") is a tree diagram frequently used to illustrate the arrangement of the clusters produced by hierarchical clustering.
K-Means Clustering
k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster.
<a id='vector'></a>
0. From DTM to Numpy Array
First, let's create our DTM, and then turn it from a sparse matrix to a regular (dense) array.
We'll use a different input option than we have, an option called filename.
End of explanation
from sklearn.metrics.pairwise import euclidean_distances
euc_dist = euclidean_distances(dtm)
print(filenames[1])
print(filenames[2])
print("\nDistance between Austen and Machiavelli:")
# the distance between Austen and Machiavelli
print(euc_dist[1, 2])
# which is greater than the distance between *Austen* and *Alcott* (index 0)
print("\nDistance between Austen and Machiavelli is greater than the distance between Austen and Alcott:")
euc_dist[1, 2] > euc_dist[0, 1]
Explanation: <a id='compare'></a>
1. Comparing texts
Arranging our texts in a document-term matrix makes available a range of exploratory procedures. For example, calculating a measure of similarity between texts becomes simple. Since each row of the document-term matrix is a sequence of a novel’s word frequencies, it is possible to put mathematical notions of similarity (or distance) between sequences of numbers in service of calculating the similarity (or distance) between any two novels. One frequently used measure of distance between vectors (a measure easily converted into a measure of similarity) is Euclidean distance. The Euclidean distance between two vectors in the plane should be familiar from geometry, as it is the length of the hypotenuse that joins the two vectors. For instance, consider the Euclidean distance between the vectors \begin{align}
\overrightarrow{x}=(1,3) \space \space and\space\space\overrightarrow{y}=(4,2) \end{align}
the Euclidean distance can be calculated as follows:
\begin{align}
\sqrt{(1-4)^2 + (3-2)^2} = \sqrt{10}
\end{align}
Note
Measures of distance can be converted into measures of similarity. If your measures of distance are all between zero and one, then a measure of similarity could be one minus the distance. (The inverse of the distance would also serve as a measure of similarity.)
Distance between two vectors:
Note
More generally, given two vectors \begin{align} \overrightarrow{x} \space \space and\space\space\overrightarrow{y}\end{align}
in p-dimensional space, the Euclidean distance between the two vectors is given by
\begin{align}
||\overrightarrow{x} −\overrightarrow{y}||=\sqrt{\sum_{i=1}^P (x_i−y_i)^2}
\end{align}
This concept of distance is not restricted to two dimensions. For example, it is not difficult to imagine the figure above translated into three dimensions. We can also persuade ourselves that the measure of distance extends to an arbitrary number of dimensions; for any two matched components in a pair of vectors (such as x<sub>2</sub> and y<sub>2</sub>), differences increase the distance.
Since two novels in our corpus now have an expression as vectors, we can calculate the Euclidean distance between them. We can do this by hand or we can avail ourselves of the scikit-learn function euclidean_distances.
A challenge for you: calculate Euclidean distance of sample texts by hand.
End of explanation
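As a quick check of the arithmetic above, and one way to take up the "by hand" challenge with NumPy (a sketch; the novel indices follow the filenames list defined earlier):
x = np.array([1, 3])
y = np.array([4, 2])
print(np.sqrt(np.sum((x - y) ** 2)))  # sqrt(10), approximately 3.162
# the same formula applied to two novels (rows of the dtm) reproduces euclidean_distances
print(np.sqrt(np.sum((dtm[1] - dtm[2]) ** 2)))
print(euc_dist[1, 2])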
from sklearn.metrics.pairwise import cosine_similarity
cos_dist = 1 - cosine_similarity(dtm)
np.round(cos_dist, 2)
##EX:
## 1. Print the cosine distance between Austen and Machiavelli
## 2. Is this distance greater or less than the distance between Austen and Alcott?
print(cos_dist[1, 2])
# which is greater than the distance between *Austen* and
# *Alcott* (index 0)
cos_dist[1, 2] > cos_dist[1, 0]
Explanation: And if we want to use a measure of distance that takes into consideration the length of the novels (an excellent idea), we can calculate the cosine similarity by importing sklearn.metrics.pairwise.cosine_similarity and use it in place of euclidean_distances.
Cosine similarity measures the angle between two vectors:
Question: How does length factor into these two equations?
Keep in mind that cosine similarity is a measure of similarity (rather than distance) that ranges between 0 and 1 (as it is the cosine of the angle between the two vectors). In order to get a measure of distance (or dissimilarity), we need to “flip” the measure so that a larger angle receives a larger value. The distance measure derived from cosine similarity is therefore one minus the cosine similarity between two vectors.
End of explanation
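A small sketch of why document length drops out of the cosine measure: scaling a vector does not change its angle, so a "novel" twice as long but with the same word proportions sits at cosine distance (approximately) zero from the original:
a = dtm[1].astype(float)
cos_sim = np.dot(a, 2 * a) / (np.linalg.norm(a) * np.linalg.norm(2 * a))
print(1 - cos_sim)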
import os # for os.path.basename
import matplotlib.pyplot as plt
from sklearn.manifold import MDS
# two components as we're plotting points in a two-dimensional plane
# "precomputed" because we provide a distance matrix
# we will also specify `random_state` so the plot is reproducible.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1)
pos = mds.fit_transform(euc_dist) # shape (n_components, n_samples)
xs, ys = pos[:, 0], pos[:, 1]
# short versions of filenames:
# convert 'data/austen-brontë/Austen_Emma.txt' to 'Austen_Emma'
names = [os.path.basename(fn).replace('.txt', '') for fn in filenames]
for x, y, name in zip(xs, ys, names):
plt.scatter(x, y)
plt.text(x, y, name)
plt.show()
Explanation: <a id='visual'></a>
2. Visualizing distances
It is often desirable to visualize the pairwise distances between our texts. A general approach to visualizing distances is to assign a point in a plane to each text, making sure that the distance between points is proportional to the pairwise distances we calculated. This kind of visualization is common enough that it has a name, “multidimensional scaling” (MDS) and family of functions in scikit-learn.
End of explanation
from scipy.cluster.hierarchy import ward, dendrogram
linkage_matrix = ward(euc_dist)
dendrogram(linkage_matrix, orientation="right", labels=names)
plt.tight_layout() # fixes margins
plt.show()
Explanation: <a id='cluster'></a>
3. Clustering texts based on distance
Clustering texts into discrete groups of similar texts is often a useful exploratory step. For example, a researcher may be wondering if certain textual features partition a collection of texts by author or by genre. Pairwise distances alone do not produce any kind of classification. To put a set of distance measurements to work in classification requires additional assumptions, such as a definition of a group or cluster.
The ideas underlying the transition from distances to clusters are, for the most part, common sense. Any clustering of texts should result in texts that are closer to each other (in the distance matrix) residing in the same cluster. There are many ways of satisfying this requirement; there is no unique clustering based on distances that is the “best”. One strategy for clustering in circulation is called Ward’s method. Rather than producing a single clustering, Ward’s method produces a hierarchy of clusterings, as we will see in a moment. All that Ward’s method requires is a set of pairwise distance measurements–such as those we calculated a moment ago. Ward’s method produces a hierarchical clustering of texts via the following procedure:
Start with each text in its own cluster
Until only a single cluster remains,
Find the closest clusters and merge them. The distance between two clusters is the change in the sum of squared distances when they are merged.
Return a tree containing a record of cluster-merges.
The function scipy.cluster.hierarchy.ward performs this algorithm and returns a tree of cluster-merges. The hierarchy of clusters can be visualized using scipy.cluster.hierarchy.dendrogram.
End of explanation
from sklearn.cluster import KMeans
km = KMeans(n_clusters=2, random_state=0)
clusters = km.fit(dtm)
clusters.labels_
list(zip(filenames, clusters.labels_))
print("Top terms per cluster:")
order_centroids = clusters.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(2):
print("Cluster %d:" % i,)
for ind in order_centroids[i, :20]:
print(' %s' % terms[ind],)
print()
Explanation: <a id='kmeans'></a>
4. K-Means Clustering
From the dendrogram above, we might expect these four novels to have two clusters: Austen and Alcott, and Machiavelli and Marx.
Let's see if this is the case using k-means clustering, which clusters on Euclidean distance.
End of explanation
text0 = 'I like to eat broccoli and bananas.'
text1 = 'I ate a banana and spinach smoothie for breakfast.'
text2 = 'Chinchillas and kittens are cute.'
text3 = 'My sister adopted a kitten yesterday.'
text4 = 'Look at this cute hamster munching on a piece of broccoli.'
text_list = [text0, text1, text2, text3, text4]
#create vector for text "names"
names = ['eat', 'smoothie', 'chinchillas', 'adopted', 'munching']
#solution
ex_vectorizer = CountVectorizer(stop_words='english')
ex_dtm = ex_vectorizer.fit_transform(text_list) # a sparse matrix
vocab = ex_vectorizer.get_feature_names() # a list
ex_dtm = ex_dtm.toarray()
ex_dtm
ex_euc_dist = euclidean_distances(ex_dtm)
print(text_list[0])
print(text_list[1])
print(text_list[2])
print(ex_euc_dist[0, 2])
ex_euc_dist[0, 2] > ex_euc_dist[0, 1]
ex_cos_dist = 1 - cosine_similarity(ex_dtm)
print(np.round(ex_cos_dist, 2))
print(ex_cos_dist[0,2])
ex_cos_dist[0,2] > ex_cos_dist[0,1]
linkage_matrix = ward(ex_euc_dist)
dendrogram(linkage_matrix, orientation="right", labels=names)
plt.tight_layout() # fixes margins
plt.show()
from nltk.stem.porter import PorterStemmer
import re
porter_stemmer = PorterStemmer()
#remove punctuation
text_list = [re.sub("[,.]", "", sentence) for sentence in text_list]
#stem words
text_list_stemmed = [' '.join([porter_stemmer.stem(word) for word in sentence.split(" ")]) for sentence in text_list]
text_list_stemmed
dtm_stem = ex_vectorizer.fit_transform(text_list_stemmed)
ex_dist = 1 - cosine_similarity(dtm_stem)
print(np.round(ex_dist, 2))
print(ex_dist[0,2])
print(ex_dist[0,1])
ex_dist[0,2] > ex_dist[0,1]
linkage_matrix = ward(ex_dist)
dendrogram(linkage_matrix, orientation="right", labels=names)
plt.tight_layout() # fixes margins
plt.show()
print(text_list[0])
print(text_list[1])
print(text_list[2])
print(text_list[3])
print(text_list[4])
Explanation: <a id='exercise'></a>
Exercise:
Find the Euclidean distance and cosine distance for the 5 sentences below. Do the distance measures make sense?
Visualize the potential clusters using a dendrogram. Do the clusters make sense?
How might we make the clusters better?
End of explanation |
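One possible answer to the last question, sketched with scikit-learn's TfidfVectorizer (an assumption about what "better" means here; other options such as a different linkage or more text would also be reasonable):
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(stop_words='english')
dtm_tfidf = tfidf_vectorizer.fit_transform(text_list_stemmed).toarray()
dist_tfidf = 1 - cosine_similarity(dtm_tfidf)
linkage_matrix = ward(dist_tfidf)
dendrogram(linkage_matrix, orientation="right", labels=names)
plt.tight_layout()
plt.show()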
8,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This example demonstrates how to convert a Caffe pretrained ResNet-50 model from https
Step1: We need a lot of building blocks from Lasagne to build network
Step2: Helper modules, some of them will help us to download images and plot them
Step3: Build Lasagne model
BatchNormalization issue in caffe
Caffe doesn't have correct BN layer as described in https
Step4: We can increase, decrease or keep same dimensionality of data using such blocks. In ResNet-50 only several transformation are used.
Keep shape with 1x1 convolution
We can apply nonlinearity transformation from (None, 64, 56, 56) to (None, 64, 56, 56) if we apply simple block with following parameters (look at the origin of a network after first pool layer)
Step5: Keep shape with 3x3 convolution
Also we can apply nonlinearity transformation from (None, 64, 56, 56) to (None, 64, 56, 56) if we apply simple block with following parameters (look at the middle of any residual blocks)
Step6: Increase shape using number of filters
We can nonlinearly increase shape from (None, 64, 56, 56) to (None, 256, 56, 56) if we apply simple block with following parameters (look at the last simple block of any risidual block)
Step7: Increase shape using number of filters
We can nonlinearly decrease shape from (None, 256, 56, 56) to (None, 64, 56, 56) if we apply simple block with following parameters (look at the first simple block of any risidual block without left branch)
Step8: Increase shape using number of filters
We can also nonlinearly decrease shape from (None, 256, 56, 56) to (None, 128, 28, 28) if we apply simple block with following parameters (look at the first simple block of any risidual block with left branch)
Step10: Following function creates simple block
Step12: Residual blocks
ResNet also contains several residual blockes built from simple blocks, each of them have two branches; left branch sometimes contains simple block, sometimes not. Each block ends with Elementwise sum layer followed by ReLu nonlinearity.
http
Step13: Gathering everighting together
Create head of the network (everithing before first residual block)
Step14: Create four groups of residual blocks
Step15: Create tail of the network (everighting after last resudual block)
Step16: Transfer weights from caffe to lasagne
Load pretrained caffe model
Step17: Copy weights
There is one more issue with BN layer
Step18: Testing
Read ImageNet synset
Step19: Download some image urls for recognition
Step20: Load mean values
Step21: Image loader
Step22: Lets take five images and compare prediction of Lasagne with Caffe | Python Code:
import caffe
Explanation: Introduction
This example demonstrates how to convert a Caffe pretrained ResNet-50 model from https://github.com/KaimingHe/deep-residual-networks (first described in http://arxiv.org/pdf/1512.03385v1.pdf) into Theano/Lasagne format.
We will create a set of Lasagne layers corresponding to the Caffe model specification (prototxt), then copy the parameters from the caffemodel file into our model (like <a href="https://github.com/Lasagne/Recipes/blob/master/examples/Using%20a%20Caffe%20Pretrained%20Network%20-%20CIFAR10.ipynb">here</a>).
This notebook produces a resnet50.pkl file, which contains a dictionary with the following fields:
* values: numpy array with parameters of the model
* synset_words: labels of classes
* mean_image: mean image which should be subtracted from each input image
This file can be used for initialization of weights of the model created by modelzoo/resnet50.py.
License
Same as in parent project https://github.com/KaimingHe/deep-residual-networks/blob/master/LICENSE
Requirements
Download the required files
<a href="https://onedrive.live.com/?authkey=%21AAFW2-FVoxeVRck&id=4006CBB8476FF777%2117887&cid=4006CBB8476FF777">Here</a> you can find folder with caffe/proto files, we need followings to be stored in ./:
* ResNet-50-deploy.prototxt contains architecture of ResNet-50 in proto format
* ResNet-50-model.caffemodel is proto serialization of model parameters
* ResNet_mean.binaryproto contains mean values
Imports
We need caffe to load weights and compare results
End of explanation
import lasagne
from lasagne.utils import floatX
from lasagne.layers import InputLayer
from lasagne.layers import Conv2DLayer as ConvLayer # can be replaced with dnn layers
from lasagne.layers import BatchNormLayer
from lasagne.layers import Pool2DLayer as PoolLayer
from lasagne.layers import NonlinearityLayer
from lasagne.layers import ElemwiseSumLayer
from lasagne.layers import DenseLayer
from lasagne.nonlinearities import rectify, softmax
Explanation: We need a lot of building blocks from Lasagne to build network
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 8, 6
import io
import urllib
import skimage.transform
from IPython.display import Image
import pickle
Explanation: Helper modules, some of them will help us to download images and plot them
End of explanation
Image(filename='images/head.png', width='40%')
Explanation: Build Lasagne model
BatchNormalization issue in caffe
Caffe doesn't have correct BN layer as described in https://arxiv.org/pdf/1502.03167.pdf:
* it can collect datasets mean ($\hat{\mu}$) and variance ($\hat{\sigma}^2$)
* it can't fit $\gamma$ and $\beta$ parameters to scale and shift standardized distribution of feature in following formula: $\hat{x}_i = \dfrac{x_i - \hat{\mu}_i}{\sqrt{\hat{\sigma}_i^2 + \epsilon}}\cdot\gamma + \beta$
To fix this issue, <a href="https://github.com/KaimingHe/deep-residual-networks">here</a> authors use such BN layer followed by Scale layer, which can fit scale and shift parameters, but can't standardize data:
<pre>
layer {
bottom: "res2a_branch1"
top: "res2a_branch1"
name: "bn2a_branch1"
type: "BatchNorm"
batch_norm_param {
use_global_stats: true
}
}
layer {
bottom: "res2a_branch1"
top: "res2a_branch1"
name: "scale2a_branch1"
type: "Scale"
scale_param {
bias_term: true
}
}
</pre>
In Lasagne we have a correct BN layer, so we do not need to use such a trick.
Replicated blocks
Simple blocks
ResNet contains a lot of similar replicated blocks; let's call them simple blocks. They have one of two architectures:
* Convolution $\rightarrow$ BN $\rightarrow$ Nonlinearity
* Convolution $\rightarrow$ BN
http://ethereon.github.io/netscope/#/gist/2f702ea9e05900300462102a33caff9c
End of explanation
Image(filename='images/conv1x1.png', width='40%')
Explanation: We can increase, decrease or keep the same dimensionality of the data using such blocks. In ResNet-50 only a few of these transformations are used.
Keep shape with 1x1 convolution
We can apply nonlinearity transformation from (None, 64, 56, 56) to (None, 64, 56, 56) if we apply simple block with following parameters (look at the origin of a network after first pool layer):
* num_filters: same as parent has
* filter_size: 1
* stride: 1
* pad: 0
End of explanation
Image(filename='images/conv3x3.png', width='40%')
Explanation: Keep shape with 3x3 convolution
Also we can apply nonlinearity transformation from (None, 64, 56, 56) to (None, 64, 56, 56) if we apply simple block with following parameters (look at the middle of any residual blocks):
* num_filters: same as parent has
* filter_size: 3x3
* stride: 1
* pad: 1
End of explanation
Image(filename='images/increase_fn.png', width='40%')
Explanation: Increase shape using number of filters
We can nonlinearly increase shape from (None, 64, 56, 56) to (None, 256, 56, 56) if we apply a simple block with the following parameters (look at the last simple block of any residual block):
* num_filters: four times greater than the parent has
* filter_size: 1x1
* stride: 1
* pad: 0
End of explanation
Image(filename='images/decrease_fn.png', width='40%')
Explanation: Decrease shape using number of filters
We can nonlinearly decrease shape from (None, 256, 56, 56) to (None, 64, 56, 56) if we apply a simple block with the following parameters (look at the first simple block of any residual block without a left branch):
* num_filters: four times less than the parent has
* filter_size: 1x1
* stride: 1
* pad: 0
End of explanation
Image(filename='images/decrease_fnstride.png', width='40%')
Explanation: Decrease shape using number of filters and stride
We can also nonlinearly decrease shape from (None, 256, 56, 56) to (None, 128, 28, 28) if we apply a simple block with the following parameters (look at the first simple block of any residual block with a left branch):
* num_filters: two times less than the parent has
* filter_size: 1x1
* stride: 2
* pad: 0
End of explanation
def build_simple_block(incoming_layer, names,
num_filters, filter_size, stride, pad,
use_bias=False, nonlin=rectify):
    """Creates stacked Lasagne layers ConvLayer -> BN -> (ReLu)
Parameters:
----------
incoming_layer : instance of Lasagne layer
Parent layer
names : list of string
Names of the layers in block
num_filters : int
Number of filters in convolution layer
filter_size : int
Size of filters in convolution layer
stride : int
Stride of convolution layer
pad : int
Padding of convolution layer
use_bias : bool
        Whether to use bias in convolution layer
nonlin : function
Nonlinearity type of Nonlinearity layer
Returns
-------
tuple: (net, last_layer_name)
net : dict
Dictionary with stacked layers
last_layer_name : string
        Last layer name
    """
net = []
net.append((
names[0],
ConvLayer(incoming_layer, num_filters, filter_size, pad, stride,
flip_filters=False, nonlinearity=None) if use_bias
else ConvLayer(incoming_layer, num_filters, filter_size, stride, pad, b=None,
flip_filters=False, nonlinearity=None)
))
net.append((
names[1],
BatchNormLayer(net[-1][1])
))
if nonlin is not None:
net.append((
names[2],
NonlinearityLayer(net[-1][1], nonlinearity=nonlin)
))
return dict(net), net[-1][0]
Explanation: Following function creates simple block
End of explanation
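A minimal shape check of build_simple_block (an illustrative sketch only; the temporary layer names are made up): a 1x1 block with stride 1 and pad 0 should keep the (None, 64, 56, 56) shape discussed above.
tmp_in = InputLayer((None, 64, 56, 56))
tmp_net, tmp_last = build_simple_block(tmp_in, ['tmp_conv', 'tmp_bn', 'tmp_relu'], 64, 1, 1, 0)
print(lasagne.layers.get_output_shape(tmp_net[tmp_last]))  # expected: (None, 64, 56, 56)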
Image(filename='images/left_branch.png', width='40%')
Image(filename='images/no_left_branch.png', width='40%')
simple_block_name_pattern = ['res%s_branch%i%s', 'bn%s_branch%i%s', 'res%s_branch%i%s_relu']
def build_residual_block(incoming_layer, ratio_n_filter=1.0, ratio_size=1.0, has_left_branch=False,
upscale_factor=4, ix=''):
    """Creates two-branch residual block
Parameters:
----------
incoming_layer : instance of Lasagne layer
Parent layer
ratio_n_filter : float
Scale factor of filter bank at the input of residual block
ratio_size : float
Scale factor of filter size
has_left_branch : bool
if True, then left branch contains simple block
upscale_factor : float
Scale factor of filter bank at the output of residual block
ix : int
Id of residual block
Returns
-------
tuple: (net, last_layer_name)
net : dict
Dictionary with stacked layers
last_layer_name : string
        Last layer name
    """
net = {}
# right branch
net_tmp, last_layer_name = build_simple_block(
incoming_layer, map(lambda s: s % (ix, 2, 'a'), simple_block_name_pattern),
int(lasagne.layers.get_output_shape(incoming_layer)[1]*ratio_n_filter), 1, int(1.0/ratio_size), 0)
net.update(net_tmp)
net_tmp, last_layer_name = build_simple_block(
net[last_layer_name], map(lambda s: s % (ix, 2, 'b'), simple_block_name_pattern),
lasagne.layers.get_output_shape(net[last_layer_name])[1], 3, 1, 1)
net.update(net_tmp)
net_tmp, last_layer_name = build_simple_block(
net[last_layer_name], map(lambda s: s % (ix, 2, 'c'), simple_block_name_pattern),
lasagne.layers.get_output_shape(net[last_layer_name])[1]*upscale_factor, 1, 1, 0,
nonlin=None)
net.update(net_tmp)
right_tail = net[last_layer_name]
left_tail = incoming_layer
# left branch
if has_left_branch:
net_tmp, last_layer_name = build_simple_block(
incoming_layer, map(lambda s: s % (ix, 1, ''), simple_block_name_pattern),
int(lasagne.layers.get_output_shape(incoming_layer)[1]*4*ratio_n_filter), 1, int(1.0/ratio_size), 0,
nonlin=None)
net.update(net_tmp)
left_tail = net[last_layer_name]
net['res%s' % ix] = ElemwiseSumLayer([left_tail, right_tail], coeffs=1)
net['res%s_relu' % ix] = NonlinearityLayer(net['res%s' % ix], nonlinearity=rectify)
return net, 'res%s_relu' % ix
Explanation: Residual blocks
ResNet also contains several residual blocks built from simple blocks; each of them has two branches, and the left branch sometimes contains a simple block and sometimes does not. Each block ends with an elementwise sum layer followed by a ReLU nonlinearity.
http://ethereon.github.io/netscope/#/gist/410e7e48fa1e5a368ee7bca5eb3bf0ca
End of explanation
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
sub_net, parent_layer_name = build_simple_block(
net['input'], ['conv1', 'bn_conv1', 'conv1_relu'],
64, 7, 3, 2, use_bias=True)
net.update(sub_net)
net['pool1'] = PoolLayer(net[parent_layer_name], pool_size=3, stride=2, pad=0, mode='max', ignore_border=False)
Explanation: Gathering everything together
Create head of the network (everything before the first residual block)
End of explanation
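A small sanity check (sketch): the head should end with the (None, 64, 56, 56) feature map used in the simple-block examples above.
print(lasagne.layers.get_output_shape(net['pool1']))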
block_size = list('abc')
parent_layer_name = 'pool1'
for c in block_size:
if c == 'a':
sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1, 1, True, 4, ix='2%s' % c)
else:
sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='2%s' % c)
net.update(sub_net)
block_size = list('abcd')
for c in block_size:
if c == 'a':
sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/2, 1.0/2, True, 4, ix='3%s' % c)
else:
sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='3%s' % c)
net.update(sub_net)
block_size = list('abcdef')
for c in block_size:
if c == 'a':
sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/2, 1.0/2, True, 4, ix='4%s' % c)
else:
sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='4%s' % c)
net.update(sub_net)
block_size = list('abc')
for c in block_size:
if c == 'a':
sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/2, 1.0/2, True, 4, ix='5%s' % c)
else:
sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='5%s' % c)
net.update(sub_net)
Explanation: Create four groups of residual blocks
End of explanation
net['pool5'] = PoolLayer(net[parent_layer_name], pool_size=7, stride=1, pad=0,
mode='average_exc_pad', ignore_border=False)
net['fc1000'] = DenseLayer(net['pool5'], num_units=1000, nonlinearity=None)
net['prob'] = NonlinearityLayer(net['fc1000'], nonlinearity=softmax)
print 'Total number of layers:', len(lasagne.layers.get_all_layers(net['prob']))
Explanation: Create tail of the network (everything after the last residual block)
End of explanation
net_caffe = caffe.Net('./ResNet-50-deploy.prototxt', './ResNet-50-model.caffemodel', caffe.TEST)
layers_caffe = dict(zip(list(net_caffe._layer_names), net_caffe.layers))
print 'Number of layers: %i' % len(layers_caffe.keys())
Explanation: Transfer weights from caffe to lasagne
Load pretrained caffe model
End of explanation
for name, layer in net.items():
if name not in layers_caffe:
print name, type(layer).__name__
continue
if isinstance(layer, BatchNormLayer):
layer_bn_caffe = layers_caffe[name]
layer_scale_caffe = layers_caffe['scale' + name[2:]]
layer.gamma.set_value(layer_scale_caffe.blobs[0].data)
layer.beta.set_value(layer_scale_caffe.blobs[1].data)
layer.mean.set_value(layer_bn_caffe.blobs[0].data)
layer.inv_std.set_value(1/np.sqrt(layer_bn_caffe.blobs[1].data) + 1e-4)
continue
if isinstance(layer, DenseLayer):
layer.W.set_value(layers_caffe[name].blobs[0].data.T)
layer.b.set_value(layers_caffe[name].blobs[1].data)
continue
if len(layers_caffe[name].blobs) > 0:
layer.W.set_value(layers_caffe[name].blobs[0].data)
if len(layers_caffe[name].blobs) > 1:
layer.b.set_value(layers_caffe[name].blobs[1].data)
Explanation: Copy weights
There is one more issue with the BN layer: Caffe stores the variance $\sigma^2$, but Lasagne stores the inverted standard deviation $\dfrac{1}{\sigma}$, so we need to apply a simple transformation to handle it.
The other issue concerns the weights of the dense layer: in Caffe they are transposed, so we should handle that too.
End of explanation
with open('./imagenet_classes.txt', 'r') as f:
classes = map(lambda s: s.strip(), f.readlines())
Explanation: Testing
Read ImageNet synset
End of explanation
index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:100]
Explanation: Download some image urls for recognition
End of explanation
blob = caffe.proto.caffe_pb2.BlobProto()
data = open('./ResNet_mean.binaryproto', 'rb').read()
blob.ParseFromString(data)
mean_values = np.array(caffe.io.blobproto_to_array(blob))[0]
Explanation: Load mean values
End of explanation
def prep_image(url, fname=None):
if fname is None:
ext = url.split('.')[-1]
im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)
else:
ext = fname.split('.')[-1]
im = plt.imread(fname, ext)
h, w, _ = im.shape
if h < w:
im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
else:
im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
im = im[::-1, :, :]
im = im - mean_values
return rawim, floatX(im[np.newaxis])
Explanation: Image loader
End of explanation
n = 5
m = 5
i = 0
for url in image_urls:
print url
try:
rawim, im = prep_image(url)
except:
print 'Failed to download'
continue
prob_lasangne = np.array(lasagne.layers.get_output(net['prob'], im, deterministic=True).eval())[0]
prob_caffe = net_caffe.forward_all(data=im)['prob'][0]
print 'Lasagne:'
res = sorted(zip(classes, prob_lasangne), key=lambda t: t[1], reverse=True)[:n]
for c, p in res:
print ' ', c, p
print 'Caffe:'
res = sorted(zip(classes, prob_caffe), key=lambda t: t[1], reverse=True)[:n]
for c, p in res:
print ' ', c, p
plt.figure()
plt.imshow(rawim.astype('uint8'))
plt.axis('off')
plt.show()
i += 1
if i == m:
break
print '\n\n'
model = {
'values': lasagne.layers.get_all_param_values(net['prob']),
'synset_words': classes,
'mean_image': mean_values
}
pickle.dump(model, open('./resnet50.pkl', 'wb'), protocol=-1)
Explanation: Let's take five images and compare the predictions of Lasagne with Caffe
End of explanation |
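A sketch of how the produced file could later be consumed (assuming a fresh copy of the same Lasagne graph is available as net): load the pickle and restore the parameters.
saved = pickle.load(open('./resnet50.pkl', 'rb'))
lasagne.layers.set_all_param_values(net['prob'], saved['values'])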
8,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sveučilište u Zagrebu
Fakultet elektrotehnike i računarstva
Strojno učenje 2018/2019
http
Step1: 1. Klasifikator stroja potpornih vektora (SVM)
(a)
Upoznajte se s razredom svm.SVC, koja ustvari implementira sučelje prema implementaciji libsvm. Primijenite model SVC s linearnom jezgrenom funkcijom (tj. bez preslikavanja primjera u prostor značajki) na skup podataka seven (dan niže) s $N=7$ primjera. Ispišite koeficijente $w_0$ i $\mathbf{w}$. Ispišite dualne koeficijente i potporne vektore. Završno, koristeći funkciju mlutils.plot_2d_svc_problem iscrtajte podatke, decizijsku granicu i marginu. Funkcija prima podatke, oznake i klasifikator (objekt klase SVC).
Izračunajte širinu dobivene margine (prisjetite se geometrije linearnih modela).
Step2: Q
Step3: (c)
Vratit ćemo se na skupove podataka outlier ($N=8$) i unsep ($N=8$) iz prošle laboratorijske vježbe (dani niže) i pogledati kako se model SVM-a nosi s njima. Naučite ugrađeni model SVM-a (s linearnom jezgrom) na ovim podatcima i iscrtajte decizijsku granicu (skupa s marginom). Također ispišite točnost modela korištenjem funkcije metrics.accuracy_score.
Step4: Q
Step5: 3. Optimizacija hiperparametara SVM-a
Pored hiperparametra $C$, model SVM s jezgrenom funkcijom RBF ima i dodatni hiperparametar $\gamma=\frac{1}{2\sigma^2}$ (preciznost). Taj parametar također određuje složenost modela
Step6: (b)
Pomoću funkcije datasets.make_classification generirajte dva skupa podataka od $N=200$ primjera
Step7: Q
Step8: (a)
Proučite funkciju za iscrtavanje histograma hist. Prikažite histograme vrijednosti značajki $x_0$ i $x_1$ (ovdje i u sljedećim zadatcima koristite bins=50).
Step9: (b)
Proučite razred preprocessing.MinMaxScaler. Prikažite histograme vrijednosti značajki $x_0$ i $x_1$ ako su iste skalirane min-max skaliranjem (ukupno dva histograma).
Step10: Q
Step11: Q
Step12: Q
Step13: (b)
To convince yourself that your implementation is correct, compare it with the one in the neighbors.KNeighborsClassifier class. Since that class uses various optimization tricks when finding the nearest neighbours, be sure to set the parameter algorithm=brute, otherwise your predictions might differ. Compare the models on the given (artificial) dataset (recall how arrays are compared; numpy.all).
Step14: 6. Analysis of the k-nearest neighbours algorithm
The k-nn algorithm has a hyperparameter $k$ (the number of neighbours). This hyperparameter directly affects the complexity of the algorithm, so it is extremely important to choose its value well. As with many other algorithms, the optimal value of the hyperparameter $k$ for k-nn depends on the specific problem, including the number of examples $N$, the number of features (dimensions) $n$, and the number of classes $K$.
To obtain more reliable results, some of the experiments have to be repeated on different datasets and the resulting error values then averaged. Use the function
Step15: Q
Step16: Q | Python Code:
import numpy as np
import scipy as sp
import pandas as pd
import mlutils
import matplotlib.pyplot as plt
%pylab inline
Explanation: Sveučilište u Zagrebu
Fakultet elektrotehnike i računarstva
Strojno učenje 2018/2019
http://www.fer.unizg.hr/predmet/su
Laboratory Exercise 3: Support Vector Machines and the k-Nearest Neighbours Algorithm
Version: 0.3
Last updated: 9 November 2018
(c) 2015-2017 Jan Šnajder, Domagoj Alagić
Published: 9 November 2018
Submission deadline: 3 December 2018 at 07:00
Instructions
The third laboratory exercise consists of seven tasks. Follow the instructions given in the text cells below. Solving the exercise amounts to completing this notebook: inserting one or more cells below the text of each task, writing the appropriate code, and evaluating the cells.
Make sure you fully understand the code you have written. When submitting the exercise, you must be able, at the request of the teaching assistant (or demonstrator), to modify your code and re-evaluate it. Furthermore, you must understand the theoretical foundations of what you are doing, within the scope of what we covered in the lectures. Below some of the tasks you will also find questions that serve as guidelines for a better understanding of the material (do not write the answers to the questions in the notebook). So do not restrict yourself to merely solving the task; feel free to experiment. That is precisely the purpose of these exercises.
You should work on the exercises independently. You may consult others about the general approach to a solution, but in the end you must complete the exercise yourself. Otherwise the exercise is pointless.
End of explanation
from sklearn.svm import SVC
seven_X = np.array([[2,1], [2,3], [1,2], [3,2], [5,2], [5,4], [6,3]])
seven_y = np.array([1, 1, 1, 1, -1, -1, -1])
fit = SVC(gamma = 'scale', kernel = 'linear').fit(seven_X, seven_y)
mlutils.plot_2d_svc_problem(seven_X, seven_y, fit)
mlutils.plot_2d_clf_problem(seven_X, seven_y, fit.predict)
Explanation: 1. Support vector machine (SVM) classifier
(a)
Familiarize yourself with the svm.SVC class, which essentially implements an interface to the libsvm implementation. Apply an SVC model with a linear kernel (i.e., without mapping the examples into a feature space) to the seven dataset (given below) with $N=7$ examples. Print the coefficients $w_0$ and $\mathbf{w}$. Print the dual coefficients and the support vectors. Finally, using the function mlutils.plot_2d_svc_problem, plot the data, the decision boundary, and the margin. The function takes the data, the labels, and the classifier (an object of class SVC).
Compute the width of the resulting margin (recall the geometry of linear models).
End of explanation
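The cell above trains the linear SVC but never prints the quantities the task asks for. A minimal sketch, assuming the fit object and the numpy import from the earlier cells are still in scope; for a linear SVM the margin width is $2/\|\mathbf{w}\|$:
w = fit.coef_[0]            # weight vector w of the linear SVM
w0 = fit.intercept_[0]      # bias term w_0
print('w0 =', w0)
print('w =', w)
print('dual coefficients:', fit.dual_coef_)
print('support vectors:\n', fit.support_vectors_)
print('margin width =', 2 / np.linalg.norm(w))   # 2 / ||w||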
from sklearn.metrics import hinge_loss
def hinge(model, x, y):
return max(0, 1-y*model.decision_function(x))
x1b = np.array([[3,2], [3.5, 2], [4,2]])
y1b = np.array([1, 1, -1])
suma = 0
for i in range(0, len(seven_X)):
suma += hinge(fit, seven_X[i].reshape(1,-1), seven_y[i])
my_hinge_loss = suma / len(seven_X)
print('my hinge loss: ' + str(my_hinge_loss[0]))
print('inbuild hinge loss: ' + str(hinge_loss(seven_y, fit.decision_function(seven_X))))
Explanation: Q: What is the width of the margin, and why?
Q: Which examples are the support vectors, and why?
(b)
Define a function hinge(model, x, y) that computes the hinge loss of the SVM model on example x. Compute the losses of the model trained on the seven dataset for the examples $\mathbf{x}^{(2)}=(3,2)$ and $\mathbf{x}^{(1)}=(3.5,2)$, which are labelled positive ($y=1$), and for $\mathbf{x}^{(3)}=(4,2)$, which is labelled negative ($y=-1$). Also compute the average loss of the SVM on the seven dataset. Convince yourself that the result is identical to what you would get with the built-in function metrics.hinge_loss.
End of explanation
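The arrays x1b and y1b defined above are never actually used, so the per-example losses for the three requested points are not shown. A short sketch that mirrors the hinge definition above (np.maximum is used so the result is always an array), relying on the fit model from the earlier cells:
for xi, yi in zip(x1b, y1b):
    loss = np.maximum(0, 1 - yi * fit.decision_function(xi.reshape(1, -1)))[0]
    print('x =', xi, ' y =', yi, ' hinge loss =', loss)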
from sklearn.metrics import accuracy_score
outlier_X = np.append(seven_X, [[12,8]], axis=0)
outlier_y = np.append(seven_y, -1)
unsep_X = np.append(seven_X, [[2,2]], axis=0)
unsep_y = np.append(seven_y, -1)
outlier = SVC(kernel = 'linear').fit(outlier_X, outlier_y)
outlier_accuracy = accuracy_score(outlier_y, outlier.predict(outlier_X))
print('outlier acc:')
print(outlier_accuracy)
unsep = SVC(kernel = 'linear').fit(unsep_X, unsep_y)
unsep_accuracy = accuracy_score(unsep_y, unsep.predict(unsep_X))
print('unsep acc:')
print(unsep_accuracy)
figure(figsize(12,4))
subplot(1,2,1)
mlutils.plot_2d_svc_problem(outlier_X, outlier_y, outlier)
mlutils.plot_2d_clf_problem(outlier_X, outlier_y, outlier.predict)
subplot(1,2,2)
mlutils.plot_2d_svc_problem(unsep_X, unsep_y, unsep)
mlutils.plot_2d_clf_problem(unsep_X, unsep_y, unsep.predict)
Explanation: (c)
We return to the outlier ($N=8$) and unsep ($N=8$) datasets from the previous laboratory exercise (given below) to see how the SVM model copes with them. Train the built-in SVM model (with a linear kernel) on these data and plot the decision boundary (together with the margin). Also print the accuracy of the model using the metrics.accuracy_score function.
End of explanation
C = [10**(-2), 1, 10**2]
jezgra = ['linear', 'poly', 'rbf']
k = 1
figure(figsize(12, 10))
subplots_adjust(wspace=0.1, hspace = 0.2)
for i in C:
for j in jezgra:
uns = SVC(C = i, kernel = j, gamma='scale').fit(unsep_X, unsep_y)
h = uns.predict
subplot(3,3,k)
mlutils.plot_2d_svc_problem(unsep_X, unsep_y, uns)
mlutils.plot_2d_clf_problem(unsep_X, unsep_y, uns.predict)
title(str(i) + ', ' + j);
k+=1
Explanation: Q: How does an outlier affect the SVM?
Q: How does a linear SVM cope with a linearly inseparable dataset?
2. Nonlinear SVM
This task shows how the choice of kernel affects the capacity of the SVM. On the unsep dataset from the previous task, train three SVM models with different kernel functions: linear, polynomial, and radial basis function (RBF). Vary the hyperparameter $C$ over the values $C\in\{10^{-2},1,10^2\}$, while for the other hyperparameters (the degree of the polynomial for the polynomial kernel and the hyperparameter $\gamma$ for the RBF kernel) use the default values. Show the boundaries between the classes (and the margins) in a plot organized as a $3x3$ grid, where the columns are the different kernels and the rows are the different values of the parameter $C$.
End of explanation
from sklearn.metrics import accuracy_score, zero_one_loss
def grid_search(X_train, X_validate, y_train, y_validate, c_range=(0,5), g_range=(0,5), error_surface=False):
# Vaš kôd ovdje
C = []
G = []
for c in range(c_range[0], c_range[1] + 1):
C.append(2**c)
for g in range(g_range[0], g_range[1] + 1):
G.append(2**g)
err_train = 0
err_validate = 0
err_minimum = inf
C_optimal = 0
G_optimal = 0
error_train_all = np.zeros((len(C), len(G)));
error_validate_all = np.zeros((len(C), len(G)));
for c in C:
for g in G:
svm = SVC(C=c, gamma=g).fit(X_train, y_train)
h_train = svm.predict(X_train)
err_train = zero_one_loss(y_train, h_train)
h_validate = svm.predict(X_validate)
err_validate = zero_one_loss(y_validate, h_validate)
error_train_all[C.index(c)][G.index(g)] = err_train
error_validate_all[C.index(c)][G.index(g)] = err_validate
if err_validate < err_minimum:
err_minimum = err_validate
C_optimal = c
G_optimal = g
if error_surface:
return C_optimal, G_optimal, error_train_all, error_validate_all
else:
return C_optimal, G_optimal
Explanation: 3. Optimizing the SVM hyperparameters
Besides the hyperparameter $C$, an SVM with an RBF kernel has an additional hyperparameter $\gamma=\frac{1}{2\sigma^2}$ (the precision). This parameter also determines the complexity of the model: a large value of $\gamma$ means the RBF is narrow, so the examples are mapped into a space in which they are (in terms of the scalar product) very different from one another, which results in more complex models. Conversely, a small value of $\gamma$ means the RBF is wide, the examples are more similar to one another, and the models are simpler. This also means that if we choose a larger $\gamma$, we have to regularize the model more strongly, i.e., choose a smaller $C$, in order to prevent overfitting. The hyperparameters $C$ and $\gamma$ therefore have to be optimized jointly, which is typically done by exhaustive grid search. The same approach is used for all models that have more than one hyperparameter.
(a)
Define a function
grid_search(X_train, X_validate, y_train, y_validate, c_range=(c1,c2), g_range=(g1,g2), error_surface=False)
that optimizes the parameters $C$ and $\gamma$ by grid search. The function should search over the hyperparameters $C\in\{2^{c_1},2^{c_1+1},\dots,2^{c_2}\}$ and $\gamma\in\{2^{g_1},2^{g_1+1},\dots,2^{g_2}\}$. It should return the optimal hyperparameters $(C^*,\gamma^*)$, i.e., those for which the model achieves the smallest error on the validation set. Additionally, if error_surface=True, the function should return the matrices (of type ndarray) of the model error (expected 0-1 loss) on the training set and on the validation set. Each matrix has dimensions $(c_2-c_1+1)\times(g_2-g_1+1)$ (rows correspond to different values of $C$, columns to different values of $\gamma$).
End of explanation
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# Vaš kôd ovdje
X1, y1 = make_classification(n_samples=200, n_features=2, n_informative=2, n_redundant=0, n_clusters_per_class=2)
X1_train, X1_validate, y1_train, y1_validate = train_test_split(X1, y1, test_size = 0.5)
X2, y2 = make_classification(n_samples=200, n_features=1000, n_informative=1000, n_redundant=0, n_clusters_per_class=2)
X2_train, X2_validate, y2_train, y2_validate = train_test_split(X2, y2, test_size = 0.5)
c_range=(-5, 15)
g_range=(-15, 3)
C1_opt, G1_opt, err_train1, err_validate1 = grid_search(X1_train, X1_validate, y1_train, y1_validate, c_range, g_range, True)
C2_opt, G2_opt, err_train2, err_validate2 = grid_search(X2_train, X2_validate, y2_train, y2_validate, c_range, g_range, True)
print("C1 optimal:", C1_opt, "C2 optimal:", C2_opt)
print("gamma1 optimal:", G1_opt, "G2 optimal:", G2_opt)
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(2, 2, 1)
ax.grid()
mlutils.plot_error_surface(err_train1, c_range, g_range)
title("TRAIN1 error")
ax = fig.add_subplot(2, 2, 2)
ax.grid()
mlutils.plot_error_surface(err_train2, c_range, g_range)
title("TRAIN2 error")
ax = fig.add_subplot(2, 2, 3)
ax.grid()
mlutils.plot_error_surface(err_validate1, c_range, g_range)
title("VALIDATION1 errors")
ax = fig.add_subplot(2, 2, 4)
ax.grid()
mlutils.plot_error_surface(err_validate2, c_range, g_range)
title("VALIDATION2 errors")
Explanation: (b)
Using the datasets.make_classification function, generate two datasets of $N=200$ examples: one with $n=2$ dimensions and one with $n=1000$ dimensions. The examples should come from two classes, with two clusters per class (n_clusters_per_class=2), so that the problem is somewhat more complex, i.e., more nonlinear. Let all features be informative. Split the set of examples into a training set and a test set in a 1:1 ratio.
On both datasets, optimize an SVM with an RBF kernel over the grid $C\in\{2^{-5},2^{-4},\dots,2^{15}\}$ and $\gamma\in\{2^{-15},2^{-14},\dots,2^{3}\}$. Show the model's error surface on the training set and on the validation set, for both datasets (four plots in total), and print the optimal hyperparameter combinations. You can use the function mlutils.plot_error_surface to display the error surface.
End of explanation
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=500,n_features=2,n_classes=2,n_redundant=0,n_clusters_per_class=1, random_state=69)
X[:,1] = X[:,1]*100+1000
X[0,1] = 3000
mlutils.plot_2d_svc_problem(X, y)
Explanation: Q: Does the error surface differ between the training set and the test set? Why?
Q: In the error-surface plot, which part of the surface corresponds to overfitting and which to underfitting? Why?
Q: How does the number of dimensions $n$ affect the error surface, i.e., the optimal hyperparameters $(C^*, \gamma^*)$?
Q: The recommendation is that an increase in the value of $\gamma$ should be accompanied by a decrease in the value of $C$. Do your results support this recommendation? Explain.
4. The effect of feature standardization on the SVM
In the first laboratory exercise we showed how features of different scales can make a trained linear regression model impossible to interpret. However, this problem arises with many models, so it is almost always important to scale the features before training, to prevent features with larger numeric ranges from dominating those with smaller ranges. This also holds for the SVM, where scaling can often substantially improve the results. The purpose of this task is to determine experimentally how feature scaling affects the accuracy of the SVM.
We will generate a two-class dataset of $N=500$ examples with $n=2$ features, such that dimension $x_1$ has larger values and a larger range than dimension $x_0$, and we will add one example whose value of feature $x_1$ stands out from the other examples:
End of explanation
figure(figsize(10, 5))
subplot(1,2,1)
hist(X[:,0], bins = 50);
subplot(1,2,2)
hist(X[:,1], bins = 50);
Explanation: (a)
Study the histogram-plotting function hist. Show histograms of the values of features $x_0$ and $x_1$ (here and in the following tasks use bins=50).
End of explanation
from sklearn.preprocessing import MinMaxScaler
x0b = MinMaxScaler().fit_transform(X[:,0].reshape(-1,1), y)
x1b = MinMaxScaler().fit_transform(X[:,1].reshape(-1,1), y)
figure(figsize(10, 5))
subplot(1,2,1)
hist(x0b, bins = 50)
subplot(1,2,2)
hist(x1b, bins = 50)
Explanation: (b)
Study the preprocessing.MinMaxScaler class. Show histograms of the values of features $x_0$ and $x_1$ after min-max scaling (two histograms in total).
End of explanation
from sklearn.preprocessing import StandardScaler
x0b = StandardScaler().fit_transform(X[:,0].reshape(-1,1), y)
x1b = StandardScaler().fit_transform(X[:,1].reshape(-1,1), y)
figure(figsize(10, 5))
subplot(1,2,1)
hist(x0b, bins = 50)
subplot(1,2,2)
hist(x1b, bins = 50)
Explanation: Q: How does this scaling work? <br>
Q: The resulting histograms are very similar. What is the difference? <br>
(c)
Study the preprocessing.StandardScaler class. Show histograms of the values of features $x_0$ and $x_1$ after standard scaling (two histograms in total).
End of explanation
from sklearn.model_selection import train_test_split
err_unscaled = []
err_std = []
err_minmax = []
for i in range(0, 30):
X_train, X_validate, y_train, y_validate = train_test_split(X, y, test_size = 0.5)
model_unscaled = SVC(gamma = 'scale').fit(X_train, y_train)
prediction_unscaled = model_unscaled.predict(X_validate)
err_unscaled.append(accuracy_score(y_validate, prediction_unscaled))
ss = StandardScaler()
X_std_train = ss.fit_transform(X_train)
X_std_valid = ss.transform(X_validate)
model_standard = SVC(gamma = 'scale').fit(X_std_train, y_train)
prediction_standard = model_standard.predict(X_std_valid)
err_std.append(accuracy_score(y_validate, prediction_standard))
mm = MinMaxScaler()
X_minmax_train = mm.fit_transform(X_train)
X_minmax_valid = mm.transform(X_validate)
model_minmax = SVC(gamma = 'scale').fit(X_minmax_train, y_train)
prediction_minmax = model_minmax.predict(X_minmax_valid)
err_minmax.append(accuracy_score(y_validate, prediction_minmax))
print('Unscaled')
print(mean(err_unscaled))
print('Std')
print(mean(err_std))
print('Min max')
print(mean(err_minmax))
Explanation: Q: How does this scaling work? <br>
Q: The resulting histograms are very similar. What is the difference? <br>
(d)
Split the set of examples into a training set and a test set in a 1:1 ratio. Train an SVM with an RBF kernel on the training set and evaluate the model's accuracy on the test set, using three variants of the dataset above: unscaled features, standardized features, and min-max scaled features. Use the default values of $C$ and $\gamma$. Measure the accuracy of each of the three models on the training set and on the test set. Repeat the procedure several times (e.g., 30) and average the results (in each repetition generate the data as given at the beginning of this task).
NB: On the training set you must first compute the scaling parameters and then apply the scaling (the fit_transform function), while on the test set you must only apply the scaling with the parameters obtained on the training set (the transform function).
End of explanation
from numpy.linalg import norm
from collections import Counter
class KNN(object):
def __init__(self, n_neighbors=3):
self.n_neighbors = n_neighbors
self.domain = []
def fit(self, X_train, y_train):
for x,y in zip(X_train, y_train):
self.domain.append([x, y])
return self.domain
def predict(self, X_test):
pred = []
for x in X_test:
dist = []
y = []
counter = []
for xd, yd in self.domain:
dist.append(norm(x-xd))
y.append(yd)
dat = sorted(zip(dist, y))[0: self.n_neighbors]
for i in range(0, self.n_neighbors):
counter.append(dat[i][1])
pred.append((Counter(counter).most_common()[0])[0])
return pred
Explanation: Q: Are the results as expected? Explain. <br>
Q: Would it be a good idea to apply the fit_transform function to the whole dataset? Why? Would it be a good idea to apply it separately to the training set and separately to the test set? Why?
5. The k-nearest neighbours algorithm
In this task we look at a simple classification model called the k-nearest neighbours algorithm. First you will implement it yourself, to become familiar with how the model works in detail, and then you will move on to analysing its hyperparameters (using the built-in class, for efficiency).
(a)
Implement a class KNN that implements the $k$ nearest neighbours algorithm. The optional constructor parameter is the number of neighbours n_neighbours ($k$), with a default value of 3. Define the methods fit(X, y) and predict(X), used for training the model and for prediction, respectively. Use the Euclidean distance as the distance measure (numpy.linalg.norm; mind the axis parameter). No weighting function needs to be implemented.
End of explanation
from sklearn.datasets import make_classification
X_art, y_art = make_classification(n_samples=100, n_features=2, n_classes=2,
n_redundant=0, n_clusters_per_class=2,
random_state=69)
mlutils.plot_2d_clf_problem(X_art, y_art)
from sklearn.neighbors import KNeighborsClassifier
knn = KNN()
knn.fit(X_art, y_art)
predict_knn = knn.predict(X_art)
builtin = KNeighborsClassifier(algorithm = 'brute', n_neighbors = 3).fit(X_art, y_art)
predict_knc = builtin.predict(X_art)
print('diff')
print(norm(predict_knn - predict_knc))
Explanation: (b)
To convince yourself that your implementation is correct, compare it with the one in the neighbors.KNeighborsClassifier class. Since that class uses various optimization tricks when finding the nearest neighbours, be sure to set the parameter algorithm=brute, otherwise your predictions might differ. Compare the models on the given (artificial) dataset (recall how arrays are compared; numpy.all).
End of explanation
def knn_eval(n_instances=100, n_features=2, n_classes=2, n_informative=2, test_size=0.3, k_range=(1, 20), n_trials=100):
best_k = 0; train_errors = []; test_errors = [];
for k in range(k_range[0],k_range[1]+1):
train_error = []
test_error = []
for j in range(0, n_trials):
X, y = make_classification(n_samples=n_instances, n_features=n_features, n_classes=n_classes, n_informative=n_informative, n_redundant=0, n_clusters_per_class=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = test_size)
mod = KNeighborsClassifier(algorithm = 'brute', n_neighbors = k).fit(X_train, y_train)
prediction_train = mod.predict(X_train)
prediction_test = mod.predict(X_test)
train_error.append(zero_one_loss(y_train, prediction_train))
test_error.append(zero_one_loss(y_test, prediction_test))
train_errors.append(mean(train_error))
test_errors.append(mean(test_error))
best_k = k_range[0] + test_errors.index(min(test_errors))
return (best_k, train_errors, test_errors)
Explanation: 6. Analysis of the k-nearest neighbours algorithm
The k-nn algorithm has a hyperparameter $k$ (the number of neighbours). This hyperparameter directly affects the complexity of the algorithm, so it is extremely important to choose its value well. As with many other algorithms, the optimal value of the hyperparameter $k$ for k-nn depends on the specific problem, including the number of examples $N$, the number of features (dimensions) $n$, and the number of classes $K$.
To obtain more reliable results, some of the experiments have to be repeated on different datasets and the resulting error values then averaged. Use the function: mlutils.knn_eval, which trains and evaluates a k-nearest neighbours model on a total of n_instances examples, repeating n_trials measurements for each value of the hyperparameter from the given interval k_range, generating a new dataset for each measurement and splitting it into a training set and a test set. The proportion of the test set is defined by the parameter test_size. The return value of the function is the quadruple (ks, best_k, train_errors, test_errors). The value best_k is the optimal value of the hyperparameter $k$ (the value for which the error on the test set is smallest). The values train_errors and test_errors are lists of the errors on the training set and the test set for all considered values of the hyperparameter $k$, while ks stores all the considered values of the hyperparameter $k$.
(a)
On the data from task 5, use the function mlutils.plot_2d_clf_problem to plot the example space and the regions corresponding to the first and the second class. Repeat this for $k\in[1, 5, 20, 100]$.
NB: The KNeighborsClassifier implementation from the scikit-learn package will probably run faster than your implementation, so use it in the remaining experiments.
End of explanation
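Task 6(a) itself (plotting the decision regions for several values of $k$) has no code cell in this notebook, so here is a minimal sketch. It assumes X_art, y_art, KNeighborsClassifier, and the pylab helpers from the earlier cells, and that mlutils.plot_2d_clf_problem accepts a prediction function exactly as in task 5:
figure(figsize(12, 8))
for idx, k in enumerate([1, 5, 20, 100]):
    clf = KNeighborsClassifier(algorithm='brute', n_neighbors=k).fit(X_art, y_art)
    subplot(2, 2, idx + 1)
    mlutils.plot_2d_clf_problem(X_art, y_art, clf.predict)
    title('k = ' + str(k))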
hiperparams = range(1, 21)
N = [100, 500, 1000, 3000]
figure(figsize(11, 8))
i = 1
for n in N:
[k, err_tr, err_tst] = knn_eval(n_instances=n, k_range=(1, 20))
subplot(2,2,i)
plot(np.array(range(1,21)), err_tr)
plot(np.array(range(1,21)), err_tst)
scatter(k,err_tst[hiperparams.index(k)])
legend(['train error', 'test error', 'min test err'], loc = 'best', prop={'size':10})
title('\nN = ' + str(n) + '\nk = ' + str(k))
xticks(hiperparams)
grid()
i+=1
Explanation: Q: How does $k$ affect the shape of the boundary between the classes?
Q: How does the algorithm behave in the extreme cases $k=1$ and $k=100$?
(b)
Using the function mlutils.knn_eval, plot the training and test errors as functions of the hyperparameter $k\in\{1,\dots,20\}$, for $N=\{100, 500, 1000, 3000\}$ examples. Make 4 separate plots (generate them in a 2x2 grid). In each iteration print the optimal value of the hyperparameter $k$ (most easily as the plot title; see plt.title).
End of explanation
from sklearn.metrics.pairwise import pairwise_distances
from numpy.random import random, random_sample
dims = []
vals = []
cosine = []
for dim in range(1, 50):
arrs = random_sample((50, dim))
arrs2 = random_sample((50, dim))
# print(arrs)
dists = norm(pairwise_distances(X=arrs, Y=arrs2))
#print(dists)
dims.append(dim)
vals.append(dists)
dists = norm(pairwise_distances(X=arrs, Y=arrs2, metric='cosine'))
cosine.append(dists)
#print(dims)
#print(vals)
plot(dims, vals)
plot(dims, cosine)
grid()
Explanation: Q: How does the optimal value of the hyperparameter $k$ change with the number of examples $N$? Why?
Q: Which region corresponds to overfitting and which to underfitting of the model? Why?
Q: Is it always possible to reach an error of 0 on the training set?
(c)
To check to what extent the k-nearest neighbours algorithm is sensitive to the presence of irrelevant features, we can use the datasets.make_classification function to generate a set of examples in which some of the features are irrelevant. Namely, the parameter n_informative determines the number of relevant features, while the parameter n_features determines the total number of features. If n_features > n_informative, some of the features will be irrelevant. Instead of using the make_classification function directly, we will use the function mlutils.knn_eval, which simply passes these parameters on but allows us to obtain more reliable estimates.
Use the function mlutils.knn_eval in two ways. In both, use $N=1000$ examples, $n=10$ features, and $K=5$ classes, but in the first let all 10 features be relevant, and in the second let only 5 of the 10 features be relevant. Print the training and test errors of both models for the optimal value of $k$ (the value for which the test error is smallest).
Q: Is the k-nearest neighbours algorithm sensitive to irrelevant features? Why?
Q: Is this problem also pronounced in the other models we have worked with so far (e.g., logistic regression)?
Q: How would the k-nearest neighbours model behave on a dataset with features of different scales? Explain in detail.
7. The "curse of dimensionality"
"Curse of dimensionality" is a collective name for a number of phenomena associated with high-dimensional spaces. These phenomena, which mostly go against our intuition, in most cases cause the accuracy of a model to decrease as the number of dimensions (features) grows.
In general, increasing the number of dimensions causes all points in the input space to become (in terms of the Euclidean distance) ever more distant from one another, so the differences between the distances between points are lost. We will check experimentally that this is indeed the case. Study the function metrics.pairwise.pairwise_distances. Generate 100 random vectors in each of the dimensions $n\in[1,2,\ldots,50]$ and compute the average Euclidean distance between all pairs of those vectors. Use the numpy.random.random function to generate the random vectors. On the same plot, sketch the curves of the average distances for the Euclidean and the cosine distance (the metric parameter).
End of explanation |
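Task 6(c) has no code cell in this notebook. A possible sketch that reuses the knn_eval function defined above (the lab asks for mlutils.knn_eval, but the locally defined knn_eval accepts the same parameters used here):
for n_inf in [10, 5]:
    best_k, train_errs, test_errs = knn_eval(n_instances=1000, n_features=10,
                                             n_classes=5, n_informative=n_inf)
    idx = best_k - 1   # k_range starts at 1
    print('informative features =', n_inf, '| best k =', best_k,
          '| train error = %.3f' % train_errs[idx],
          '| test error = %.3f' % test_errs[idx])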
8,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint
Step11: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step12: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step13: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step14: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step15: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output | Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
Explanation: Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
new_list = numbers_str.split(",")
numbers = [int(item) for item in new_list]
max(numbers)
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
#len(numbers)
sorted(numbers)[10:]
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
sorted([item for item in numbers if item % 3 == 0])
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
from math import sqrt
# your code here
squared = []
for item in numbers:
if item < 100:
squared_numbers = sqrt(item)
squared.append(squared_numbers)
squared
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
[item['name'] for item in planets if item['diameter'] > 4]
# diameter greater than four earth radii, as required for the expected output
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
#sum([int(item['mass']) for item in planets])
sum([item['mass'] for item in planets])
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
import re
planet_with_giant= [item['name'] for item in planets if re.search(r'\bgiant\b', item['type'])]
planet_with_giant
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
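For completeness, one possible answer to the EXTREME BONUS above — a sketch using the key parameter of sorted (sorted is stable, so Mercury stays ahead of Venus for the tie at zero moons):
[p['name'] for p in sorted(planets, key=lambda p: p['moons'])]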
[item for item in poem_lines if re.search(r'\b[a-zA-Z]{4}\b \b[a-zA-Z]{4}\b', item)]
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
[item for item in poem_lines if re.search(r'\b[a-zA-Z]{5}\b.?$',item)]
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
all_lines = " ".join(poem_lines)
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
re.findall(r'[I] (\b\w+\b)', all_lines)
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
menu = []
for item in entrees:
entrees_dictionary= {}
match = re.search(r'(.*) .(\d*\d\.\d{2})\ ?( - v+)?$', item)
if match:
name = match.group(1)
            price = float(match.group(2))   # convert to float so the price matches the expected output
#vegetarian= match.group(3)
if match.group(3):
entrees_dictionary['vegetarian']= True
else:
entrees_dictionary['vegetarian']= False
entrees_dictionary['name']= name
entrees_dictionary['price']= price
menu.append(entrees_dictionary)
menu
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
Great work! You are done. Go cavort in the sun, or whatever it is you students do when you're done with your homework
End of explanation |
8,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tokenize the text using nltk
Step1: Assign POS tags to the words in the text
Step3: Normalize - return a list of tuples with the first item's periods removed.
Step4: This will be used to determine adjacent words in order to construct keyphrases with two words
Build Graph
Return a networkx graph instance.
Initialize an undirected graph.
Step6: Add edges to the graph (weighted by Levenshtein distance)
Step7: For example,
Step8: pageRank -
initial value of 1.0, error tolerance of 0.0001,
Step9: Most important words in ascending order of importance | Python Code:
import itertools
import networkx as nx
import nltk

# fread is assumed to already hold the raw text of the document to be processed.
word_tokens = nltk.word_tokenize(fread)
Explanation: Tokenize the text using nltk
End of explanation
tagged = nltk.pos_tag(word_tokens)
textlist = [x[0] for x in tagged]
# filter_for_tags
defaulttags = ['NN','JJ','NNP']
tagged_filtered = [item for item in tagged if item[1] in defaulttags]
tagged_filtered
Explanation: Assign POS tags to the words in the text
End of explanation
tagged_filtered_normalized = [(item[0].replace('.',''), item[1]) for item in tagged_filtered]
def unique_everseen(iterable, key=None):
    """List unique elements in order of appearance.

    Examples:
    unique_everseen('AAAABBBCCDAABBB') --> A B C D
    unique_everseen('ABBCcAD', str.lower) --> A B C D
    """
seen = set()
seen_add = seen.add
if key is None:
for element in [x for x in iterable if x not in seen]:
seen_add(element)
yield element
else:
for element in iterable:
k = key(element)
if k not in seen:
seen_add(k)
yield element
unique_word_set=unique_everseen([x[0] for x in tagged_filtered_normalized])
word_set_list = list(unique_word_set)
word_set_list
Explanation: Normalize - return a list of tuples with the first item's periods removed.
End of explanation
gr = nx.Graph()
gr.add_nodes_from(word_set_list)
nodePairs = list(itertools.combinations(word_set_list,2))
Explanation: This will be used to determine adjacent words in order to construct keyphrases with two words
Build Graph
Return a networkx graph instance.
Initialize an undirected graph.
End of explanation
def levenshtein_distance(first,second):
    """Return the Levenshtein distance between two strings.

    http://rosettacode.org/wiki/Levenshtein_distance#Python
    """
if len(first) > len(second):
first, second = second, first
distances = range(len(first)+1)
for index2, char2 in enumerate(second):
new_distances = [index2 + 1]
for index1, char1 in enumerate(first):
if char1 == char2:
new_distances.append(distances[index1])
else:
new_distances.append(1 + min((distances[index1],
distances[index1+1],
new_distances[-1])))
distances = new_distances
return distances[-1]
Explanation: Add edges to the graph (weighted by Levenshtein distance)
End of explanation
example_pair = nodePairs[0]; example_pair
levenshtein_distance( example_pair[0], example_pair[1] )
[( index2, char2 ) for index2, char2 in enumerate(example_pair[1])]
for pair in nodePairs:
firstString = pair[0]
secondString = pair[1]
levDistance = levenshtein_distance(firstString, secondString)
gr.add_edge(firstString, secondString, weight=levDistance)
Explanation: For example,
End of explanation
calculated_page_rank = nx.pagerank(gr, weight='weight')
Explanation: pageRank -
initial value of 1.0, error tolerance of 0.0001,
End of explanation
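The call above uses the networkx defaults; here is a variant with the tolerance and a uniform starting value of 1.0 made explicit, matching the description (a sketch; tol and nstart are standard nx.pagerank parameters):
nstart = {node: 1.0 for node in gr.nodes()}
calculated_page_rank = nx.pagerank(gr, weight='weight', tol=0.0001, nstart=nstart)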
keyphrases = sorted(calculated_page_rank, key=calculated_page_rank.get, reverse=True)
Explanation: Most important words, in descending order of importance (most important first)
End of explanation |
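The notebook stops at single-word keywords, even though textlist was built to check for adjacent words. A possible post-processing step (not in the original notebook) that joins top-ranked keywords appearing next to each other in the token sequence into two-word keyphrases:
top_keywords = set(keyphrases[:len(keyphrases) // 3])
two_word_phrases = set()
for first, second in zip(textlist, textlist[1:]):
    if first in top_keywords and second in top_keywords:
        two_word_phrases.add(first + ' ' + second)
two_word_phrases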
8,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 3
Imports
Step2: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution
Step3: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step4: Compute a 2d NumPy array called phi
Step6: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step7: Use interact to animate the plot_soliton_data function versus time. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 3
Imports
End of explanation
def soliton(x, t, c, a):
    """Return phi(x, t) for a soliton wave with constants c and a."""
return 0.5 * c * ((1/(np.cosh(((c**0.5)/2)*(x - c*t - a))))**2)
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
Explanation: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or t are NumPy arrays, in which case it should return a NumPy array itself.
End of explanation
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
End of explanation
# YOUR CODE HERE
t_arr, x_arr = np.meshgrid(t, x)
phi = soliton(x_arr, t_arr, c, a)
print(phi)
print(phi.shape)
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
End of explanation
def plot_soliton_data(i=0):
    """Plot the soliton data at t[i] versus x."""
    # Note: i is passed straight through as the time value (the interact slider below supplies floats).
    plt.plot(x, soliton(x, i, c, a))
plt.tick_params(axis = "x", direction = "out", length = 5)
plt.tick_params(axis = "y", direction = "out", length = 5)
plt.grid(True)
plt.box(False)
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
# YOUR CODE HERE
interact(plot_soliton_data, i=(0.0, 20.0, 0.1))
assert True # leave this for grading the interact with plot_soliton_data cell
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation |
8,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
webpages module
author
Step1: field setting
invalid attribute
raises Attribute error
Step2: set pagefilename
the filename concatenates the basepath, relpaths, pagefilename, and pagefileext into the pagename when set
Step3: set figfilename
similarly the figfilename concatenates the basepath, relpaths, figfilename, and figfileext into the figname when set
Step4: render
basic template
Step5: tabs | Python Code:
%load_ext autoreload
%autoreload 2
import os, sys
path = os.path.abspath('../..'); sys.path.insert(0, path) if path not in sys.path else None
from IPython.display import HTML
from pywebify import webpage
Page = webpage.Webpage
Explanation: webpages module
author: kevin.tetz
description: webpages module tests
End of explanation
Page(blah=True)
Explanation: field setting
invalid attribute
raises Attribute error
End of explanation
page = Page(relpaths=['spam', 'eggs'])
page.pagefilename = 'newwebpage'
print(page.pagename)
Explanation: set pagefilename
the filename concatenates the basepath, relpaths, pagefilename, and pagefileext into the pagename when set
End of explanation
page = Page(relpaths=['foo', 'bar', 'baz'])
page.figfilename = 'newfigure'
print(page.figname)
Explanation: set figfilename
similarly the figfilename concatenates the basepath, relpaths, figfilename, and figfileext into the figname when set
End of explanation
HTML(Page().content('<hr><h1>Hello World!</h1><hr>').render().pagename)
os.listdir(r'..\img')
HTML(Page().content('<hr><h1>Hello Image!</h1><hr>').imglink(r'..\img\favicon.png').render().pagename)
Explanation: render
basic template
End of explanation
page = Page()
page.pagefilename = 'tabs1'
page.pagelink().tabsinit('tab0').content('tab0 content!')
page.tabsnext('tab1').content('tab1 content?').tabsnext('tab2').content('hello tab 2!@?!')
HTML(page.tabsend().render().pagename)
Explanation: tabs
End of explanation |
8,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sinusoidal Steady State Voltage on a Transmission Line
The voltage on a lossless transmission line is given by
\begin{aligned} v(z,t) & = v_0 \cos(\omega t - \beta z) + \left|{\Gamma_L}\right| v_0 \cos(\omega t + \beta z + \phi_L)\\
& = \Re(\tilde{V}(z) e^{j\omega t} )\end{aligned}
where $\Re()$ is an operator that takes the real part of the enclosed expression, $\omega = 2\pi f$ ($f$ is the frequency of the sinusoidal voltage), $\beta$ is the wavenumber (propagation constant), and $\Gamma_L$ is the load reflection coefficient, which in general is complex such that $\Gamma_L = \left|\Gamma_L\right|\exp(j \phi_L)$. The phase velocity, $u$, is $\omega / \beta$. Since $u = \lambda f$, $\beta = 2 \pi / \lambda$, where $\lambda$ is the wavelength of the sinusoidal voltage.
The voltage phasor is
$$ \tilde{V}(z) = V^+_0 e^{-j\beta z}[1 + \Gamma(z)]$$
where we have used the generalized reflection coefficient
$$ \Gamma(z) = \Gamma_L e^{j2\beta z}. $$
Note that $V^+_0$ can in general be complex such that $V^+_0 = \left|V^+_0\right|e^{j\theta_V}$. The magnitude of the voltage phasor, $\tilde{V}(z)$, is the envelope of the time-varying real voltage and is called the standing wave. It can be calculated as
$$ \left|\tilde{V}(z)\right| = \left|V^+_0\right|\sqrt{1 + 2\left|\Gamma_L\right|\cos(2\beta z + \phi_L) + \left|\Gamma_L\right|^2}.$$
The voltage standing wave ratio is given by
$$VSWR = \frac{\left|\tilde{V}(z)\right|_{max}}{\left|\tilde{V}(z)\right|_{min}} = \frac{1 + \left|\Gamma_L\right|}{1 - \left|\Gamma_L\right|}$$
Import packages and switch to correct matplotlib graphics backend for animations
Step1: Function definitions
Step2: Set transmission line parameters and plot voltages
Step3: Sinusoidal Steady State Current on a Transmission Line
The current phasor is
$$ \tilde{I}(z) = \frac{V^+_0}{Z_0} e^{-j\beta z}[1 - \Gamma(z)]$$
The current standing wave is the magnitude of the current phasor
Step4: Set transmission line parameters and plot CURRENTS | Python Code:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
# Switch to a backend that supports FuncAnimation
plt.switch_backend('tkagg')
print 'Matplotlib graphics backend in use:',plt.get_backend()
Explanation: Sinusoidal Steady State Voltage on a Transmission Line
The voltage on a lossless transmission line is given by
\begin{aligned} v(z,t) & = v_0 \cos(\omega t - \beta z) + \left|{\Gamma_L}\right| v_0 \cos(\omega t + \beta z + \phi_L)\\
& = \Re(\tilde{V}(z) e^{j\omega t} )\end{aligned}
where $\Re()$ is an operator that takes the real part of the enclosed expression, $\omega = 2\pi f$ ($f$ is the frequency of the sinusoidal voltage), $\beta$ is the wavenumber (propagation constant), and $\Gamma_L$ is the load reflection coefficient, which in general is complex such that $\Gamma_L = \left|\Gamma_L\right|\exp(j \phi_L)$. The phase velocity, $u$, is $\omega / \beta$. Since $u = \lambda f$, $\beta = 2 \pi / \lambda$, where $\lambda$ is the wavelength of the sinusoidal voltage.
The voltage phasor is
$$ \tilde{V}(z) = V^+_0 e^{-j\beta z}[1 + \Gamma(z)]$$
where we have used the generalized reflection coefficient
$$ \Gamma(z) = \Gamma_L e^{j2\beta z}. $$
Note that $V^+_0$ can in general be complex such that $V^+_0 = \left|V^+_0\right|e^{j\theta_V}$. The magnitude of the voltage phasor, $\tilde{V}(z)$, is the envelope of the time-varying real voltage and is called the standing wave. It can be calculated as
$$ \left|\tilde{V}(z)\right| = \left|V^+_0\right|\sqrt{1 + 2\left|\Gamma_L\right|\cos(2\beta z + \phi_L) + \left|\Gamma_L\right|^2}.$$
The voltage standing wave ratio is given by
$$VSWR = \frac{\left|\tilde{V}(z)\right|_{max}}{\left|\tilde{V}(z)\right|_{min}} = \frac{1 + \left|\Gamma_L\right|}{1 - \left|\Gamma_L\right|}$$
Import packages and switch to correct matplotlib graphics backend for animations
End of explanation
def vplus(v0,f,t,beta,z):
return v0*np.cos(2*np.pi*f*t - beta*z)
def vminus(v0,f,t,beta,z,gammaLmagnitude,gammaLphase_rad):
return gammaLmagnitude*v0*np.cos(2*np.pi*f*t + beta*z + gammaLphase_rad)
def vtotal(v0,f,t,beta,z,gammaLmagnitude,gammaLphase_rad):
return vplus(v0,f,t,beta,z) + vminus(v0,f,t,beta,z,gammaLmagnitude,gammaLphase_rad)
def phasormagnitude(v0,f,beta,z,gammaLmagnitude,gammaLphase_rad):
return v0*np.sqrt(1 + 2*gammaLmagnitude*np.cos(2*beta*z + gammaLphase_rad) + gammaLmagnitude**2)
# Return string containing text version of complex number
# Handle special cases: angle = 0, pi, -pi, pi/2, and -pi/2
def complextostring(complexnum):
tolerance = 1.0e-3
angle = np.angle(complexnum)
if angle < tolerance and angle > -tolerance: # angle is essentially 0.0
tempstr = "%.2f" % abs(complexnum)
elif angle > np.pi - tolerance or angle < -np.pi + tolerance: # angle close to +pi or -pi?
tempstr = "-%.2f" % abs(complexnum)
elif angle < np.pi/2 + tolerance and angle > np.pi/2 - tolerance: # angle close to np.pi/2?
tempstr = "j%.2f" % abs(complexnum)
elif angle < -np.pi/2 + tolerance and angle > -np.pi/2 - tolerance: # angle close to -np.pi/2?
tempstr = "-j%.2f" % abs(complexnum)
elif angle < 0.0: # put negative sign in front of j, otherwise it will be between j and the number
tempstr = "%.2f exp(-j%.2f)" % (abs(complexnum), -angle)
else:
tempstr = "%.2f exp(j%.2f)" % (abs(complexnum), angle)
return tempstr
Explanation: Function definitions
End of explanation
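A quick sanity check, not part of the original notebook: the closed-form standing-wave expression in phasormagnitude should equal the magnitude of the complex phasor $V^+_0 e^{-j\beta z}[1 + \Gamma_L e^{j2\beta z}]$. The frequency argument of phasormagnitude is unused, so any value may be passed; the test values below are arbitrary.
ztest = np.linspace(-10.0, 0.0, 5)
betatest = 2*np.pi/2.0
gammaL = 0.5*np.exp(1j*np.radians(45.0))
direct = np.abs(1.0*np.exp(-1j*betatest*ztest)*(1 + gammaL*np.exp(1j*2*betatest*ztest)))
closedform = phasormagnitude(1.0, 1.0e8, betatest, ztest, abs(gammaL), np.angle(gammaL))
print(np.allclose(direct, closedform))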
#-------------------------------------------------------------------------
#
# Set these parameters to model desired transmission line situation
#
#-------------------------------------------------------------------------
# Specify sinusoidal voltage parameters & reflection coefficient
wavelength_m = 2.0 # wavelength in meters
v0 = 1.0 # voltage amplitude in volts
reflcoeffmagn = 1.0 # magnitude of the reflection coefficient
reflcoeffphase_degrees = 0.0 # phase of the reflection coefficient IN DEGREES! (changed 1/21/15)
velocity_mps = 2.0e8 # voltage phase velocity along transmission line
#-------------------------------------------------------------------------
#
# Don't change anything below this point
#
#-------------------------------------------------------------------------
# Set up plot parameters for transmission line
zmin = -10
zmax = 0
numzpnts = 1000
# Set up animation parameters
numframes = 20
framespersec = 15
frameperiod_msec = int(1000.0*float(1)/framespersec)
#print 'Frame period = %d ms' % frameperiod_msec
# Calculate derived parameters
beta = 2*np.pi/wavelength_m
frequency_Hz = velocity_mps / wavelength_m
period_s = 1.0/frequency_Hz
reflcoeffphase_rad = np.radians(reflcoeffphase_degrees)
# Set up sampling grid along transmission line
z = np.linspace(zmin, zmax, numzpnts)
# Calculate standing wave
standingwave = phasormagnitude(v0,frequency_Hz,beta,z,reflcoeffmagn,reflcoeffphase_rad)
standingwavemax = max(standingwave)
standingwavemin = min(standingwave)
if standingwavemin > 1.0e-2:
vswr_text = standingwavemax/standingwavemin
vswr_text = '\nVSWR = %.2f' % vswr_text
else:
vswr_text = '\nVSWR = $\infty$'
# Set up text for plot label
reflcoeffcmplx = reflcoeffmagn * complex(np.cos(reflcoeffphase_rad),np.sin(reflcoeffphase_rad))
labeltext = '$\Gamma_L$ = ' + complextostring(reflcoeffcmplx)
labeltext += '\n$\lambda$ = %.2f m' % wavelength_m
labeltext += '\nf = %.2e Hz' % frequency_Hz
labeltext += '\nu = %.2e m/s' % velocity_mps
labeltext += '\n$V_0$ = %.2f V' % v0
labeltext += vswr_text
# Set up figure, axis, and plot elements, including those to animate (i.e., line1, line2, line3)
fig2 = plt.figure()
ax2 = plt.axes(xlim=(zmin, zmax), ylim=(-2, 4))
line1, = ax2.plot([], [], 'b--', label='$v^+$')
line2, = ax2.plot([], [], 'r--', label='$v^-$')
line3, = ax2.plot([], [], 'g', label='$v_{total} = v^+ + v^-$')
line4, = ax2.plot(z,standingwave, color='black', label='$\mathrm{Standing} \/ \mathrm{wave}$')
ax2.axhline(y=0.0,ls='dotted',color='k')
ax2.legend(loc='upper left')
ax2.set_xlabel('z (m)')
ax2.set_ylabel('Voltage (V)')
ax2.set_title('Transmission Line Voltage - Sinusoidal Steady State')
# initialization function (background of each frame)
def init():
ax2.text(0.55,0.75,labeltext, transform = ax2.transAxes)
line1.set_data([], [])
line2.set_data([], [])
line3.set_data([], [])
#ax2.legend.set_zorder(20)
return line1, line2, line3,
# animation function - called sequentially
def animate_vplusandminus(i):
t = period_s * float(i)/numframes
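# Each frame advances time by period_s/numframes, so the numframes frames together
# sweep exactly one full period of the sinusoid before the animation repeats.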
vp = vplus(v0,frequency_Hz,t,beta,z)
line1.set_data(z, vp)
vm = vminus(v0,frequency_Hz,t,beta,z,reflcoeffmagn,reflcoeffphase_rad)
line2.set_data(z, vm)
vtot = vp + vm
line3.set_data(z, vtot)
return line1, line2, line3,
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig2, animate_vplusandminus, init_func=init,
frames=numframes, interval=frameperiod_msec, blit=True)
plt.show()
Explanation: Set transmission line parameters and plot voltages
End of explanation
# Define function to calculate the magnitude of the current phasor
def currentphasormagnitude(v0,f,beta,z,gammaLmagnitude,gammaLphase_rad,z0):
return (v0/z0)*np.sqrt(1 - 2*gammaLmagnitude*np.cos(2*beta*z + gammaLphase_rad) + gammaLmagnitude**2)
Explanation: Sinusoidal Steady State Current on a Transmission Line
The current phasor is
$$ \tilde{I}(z) = \frac{V^+_0}{Z_0} e^{-j\beta z}[1 - \Gamma(z)]$$
The current standing wave is the magnitude of the current phasor:
$$ \left|\tilde{I}(z)\right| = \frac{\left|V^+_0\right|}{Z_0}\sqrt{1 - 2\left|\Gamma_L\right|\cos(2\beta z + \theta_L) + \left|\Gamma_L\right|^2}.$$
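Because the cosine term enters with a minus sign here (and a plus sign in the voltage expression), the current standing wave has its maxima where the voltage standing wave has its minima, and vice versa; both patterns share the same standing wave ratio
$$ \mathrm{VSWR} = \frac{1 + \left|\Gamma_L\right|}{1 - \left|\Gamma_L\right|}.$$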
End of explanation
#-------------------------------------------------------------------------
#
# Set these parameters to model desired transmission line situation
#
#-------------------------------------------------------------------------
# Specify sinusoidal voltage parameters & reflection coefficient
wavelength_m = 2.0 # wavelength in meters
v0 = 1.0 # voltage amplitude in volts
reflcoeffmagn = 0.5 # magnitude of the reflection coefficient
reflcoeffphase_degrees = 0.0 # phase of the reflection coefficient IN DEGREES! (changed 1/21/15)
velocity_mps = 2.0e8 # voltage phase velocity along transmission line
z0 = 50.0 # t-line characteristic impedance
#-------------------------------------------------------------------------
#
# Don't change anything below this point
#
#-------------------------------------------------------------------------
# Set up plot parameters for transmission line
zmin = -10
zmax = 0
numzpnts = 1000
# Set up animation parameters
numframes = 20
framespersec = 15
frameperiod_msec = int(1000.0/framespersec)
#print('Frame period = %d ms' % frameperiod_msec)
# Calculate derived parameters
beta = 2*np.pi/wavelength_m
frequency_Hz = velocity_mps / wavelength_m
period_s = 1.0/frequency_Hz
reflcoeffphase_rad = np.radians(reflcoeffphase_degrees)
# Set up sampling grid along transmission line
z = np.linspace(zmin, zmax, numzpnts)
# Calculate standing wave
standingwave = currentphasormagnitude(v0,frequency_Hz,beta,z,reflcoeffmagn,reflcoeffphase_rad,z0)
standingwavemax = max(standingwave)
standingwavemin = min(standingwave)
if standingwavemin > 1.0e-2:
vswr = standingwavemax/standingwavemin
vswr_text = '\nVSWR = %.2f' % vswr
else:
vswr_text = '\nVSWR = $\infty$'
# Set up text for plot label
reflcoeffcmplx = reflcoeffmagn * complex(np.cos(reflcoeffphase_rad),np.sin(reflcoeffphase_rad))
labeltext = '$\Gamma_L$ = ' + complextostring(reflcoeffcmplx)
labeltext += '\n$\lambda$ = %.2f m' % wavelength_m
labeltext += '\nf = %.2e Hz' % frequency_Hz
labeltext += '\nu = %.2e m/s' % velocity_mps
labeltext += '\n$V_0$ = %.2f V' % v0
labeltext += '\n$Z_0$ = %.2f $\Omega$' % z0
labeltext += vswr_text
# Set up figure, axis, and plot elements, including those to animate (i.e., line1, line2, line3)
fig2 = plt.figure()
ax2 = plt.axes(xlim=(zmin, zmax), ylim=(-2.0/z0, 4.0/z0))
line1, = ax2.plot([], [], 'b--', label='$i^+$')
line2, = ax2.plot([], [], 'r--', label='$i^-$')
line3, = ax2.plot([], [], 'g', label='$i_{total} = i^+ + i^-$')
line4, = ax2.plot(z,standingwave, color='black', label='$\mathrm{Current} \/ \mathrm{standing} \/ \mathrm{wave}$')
ax2.axhline(y=0.0,ls='dotted',color='k')
ax2.legend(loc='upper left')
ax2.set_xlabel('z (m)')
ax2.set_ylabel('Current (A)')
ax2.set_title('Transmission Line Current - Sinusoidal Steady State')
# initialization function (background of each frame)
def init():
ax2.text(0.55,0.7,labeltext, transform = ax2.transAxes)
line1.set_data([], [])
line2.set_data([], [])
line3.set_data([], [])
#ax2.legend.set_zorder(20)
return line1, line2, line3,
# animation function - called sequentially
def animate_vplusandminus(i):
t = period_s * float(i)/numframes
ip = vplus(v0,frequency_Hz,t,beta,z) / z0
line1.set_data(z, ip)
im = -vminus(v0,frequency_Hz,t,beta,z,reflcoeffmagn,reflcoeffphase_rad) / z0
line2.set_data(z, im)
itot = ip + im
line3.set_data(z, itot)
return line1, line2, line3,
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig2, animate_vplusandminus, init_func=init,
frames=numframes, interval=frameperiod_msec, blit=True)
plt.show()
Explanation: Set transmission line parameters and plot CURRENTS
End of explanation |
8,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="center"><h1>Image Processing with Hybridizer</h1></div>
Image processing is most often an embarrassingly parallel problem. It naturally fits on the GPU.
In this lab, we will study optimization techniques through the implementation and analysis of the median filter, a robust denoising filter.
Prerequisites
To get the most out of this lab, you should already be able to
Step1: Distributing on CPU Threads
To accelerate calculations on the CPU, we can distribute the work across threads. For this, use the Parallel.For construct on the outer loop over lines, so that lines are processed in parallel. Note that each thread may require a separate buffer.
Modify 01-parfor-csharp.cs to make use of CPU parallelism.
Should you need, have a look at the solution.
Step2: Running on the GPU
In order to run the filter on the GPU, three operations are needed
Step3: Memory Allocation
The obtained performance appears to be very low. Let's investigate.
In the Parallel.For, an array is allocated for each line in the image. A malloc on the GPU, done for each thread, is very expensive (given the execution configuration - we will see that later on - there are thousands of such calls).
To reduce this, we don't allocate memory dynamically on the heap, but rather on the stack (local memory - or registers in the best case), which is allocated once at kernel startup. To this aim, we expose a class whose constructor is mapped to a C array declaration
Step4: Feeding the Beast
A modern GPU is made of thousands of CUDA cores, on which operations need to be stacked to hide latency. In our image processing example, the work is distributed over lines, of which there are only a couple of thousand for the whole image. Each CUDA thread then processes a complete line, as illustrated in the following image
Step5: Distributing 1960 lines by blocks of 128 threads results in 16 busy blocks, which uses a fraction of most GPUs and does not hide latency.
More parallelism can be extracted by dicing the image into small squares instead of stripes, as illustrated here
Step6: Profiling application
Profiling the application may be done with nvprof. The following execution box provides the command line. We are querying the following metrics
Step7: Static Sort
We can see that the number of local memory transactions is large
Step8: Local load and store transactions should have been reduced to zero.
The display of kernel time actually includes some memory transfer and synchronization. The real execution time is returned by the profiler when run in summary mode.
Step9: Cache Coherence
GPUs are equipped with L1 and L2 caches, which are used automatically. There are other cache types such as the texture cache or the constant cache. All of these help improve data locality.
In the context of this application, the traversal of the data can be predicted. Instead of dicing the image and having each thread load around fifty values (in our example), we can arrange the calculations to reuse previously loaded data and reduce the cache pressure from 49 loads down to 7.
For this, we take a bit more control on the parallel loop using explicit work distribution
Step10: The number of gld_transactions and l2_read_transactions should be significantly reduced.
Step11: It is also possible to query the utilization of the different pipelines. In this last version, we should have a high utilization on the single_precision_fu_utilization, and low utilization for the other units. We are compute bound ! | Python Code:
import platform
if platform.system() == "Windows" : # create directory on Windows
!mkdir output-01-naive
if platform.system() == "Linux" : # create directory on Linux
!mkdir -p ./output-01-naive
!hybridizer-cuda ./01-naive/01-naive-csharp.cs graybitmap.cs -o ./01-naive/01-naive-csharp.exe -run
# convert bmp to png to have interactive display
from PIL import Image
img = Image.open('./output-01-naive/denoised.bmp')
img.save('./output-01-naive/denoised.png', 'png')
from IPython.display import Image
Image(filename="./output-01-naive/denoised.png", width=384, height=384)
Explanation: <div align="center"><h1>Image Processing with Hybridizer</h1></div>
Image processing is most often an embarrassingly parallel problem. It naturally fits on the GPU.
In this lab, we will study optimization techniques through the implementation and analysis of the median filter, a robust denoising filter.
Prerequisites
To get the most out of this lab, you should already be able to:
- Write, compile, and run C# programs that both call CPU functions and launch GPU kernels.
- Control parallel thread hierarchy using execution configuration.
- Have some basic familiarity with images
Objectives
By the time you complete this lab, you will be able to:
- Accelerate image processing algorithms with Hybridizer and GPUs
- Explore three different work distribution patterns for image processing
- Allocate data into registers
- Use basic profiling metrics for pipeline usage and cache usage
Median Filter
The median filter is a non-linear image filter. It is a robust filter used to remove noise in images. For a given window size, the median of values within that window is used to represent the output. Depending on the size of the window, the results will vary, and a size of 1x1 outputs the same image. Illustration of the filter:
<img title="median-filter.png" src="./images/median-filter.png"/>
Unlike Gaussian or related filters, calculating the median requires a sort, which makes an efficient implementation more complex. It is a very data-intensive filter with no arithmetic operation: given the window data, the median is extracted by sorting the values in the window. From one output pixel to an adjacent one, most of the data is shared, and we will see how to make use of this overlap.
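To make the per-pixel operation concrete, here is a small NumPy sketch (an illustration only, not part of the lab sources) of what is computed for a single output pixel:
```python
import numpy as np

def median_at(image, i, j, window=3):
    # Gather the (2*window+1) x (2*window+1) neighborhood around pixel (j, i)
    # and return its median; border pixels are ignored, as in the lab code.
    patch = image[j - window:j + window + 1, i - window:i + window + 1]
    return np.median(patch)
```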
Working Set
In this lab, we will be processing a reference image (on the left), onto which noise has been artificially added: white pixels have been randomly added to the input image (on the right).
<div style="display:table;margin:0 auto"><div style="display:block;float:left"><img title="lena_highres_greyscale.bmp" src="./images/lena_highres_greyscale.bmp" width="384"/></div><div style="display:block;float:left;margin-left:32px"><img title="lena_highres_greyscale_noise.bmp" src="./images/lena_highres_greyscale_noise.bmp" width="384"/></div></div>
First Naive Implementation
We start the implementation of the filter with a first naive approach as follows:
```csharp
public static void NaiveCsharp(ushort[] output, ushort[] input, int width, int height)
{
int windowCount = 2 * window + 1;
var buffer = new ushort[windowCount * windowCount];
for (int j = window; j < height - window; ++j)
{
for (int i = window; i < width - window; ++i)
{
for (int k = -window; k <= window; ++k)
{
for (int p = -window; p <= window; ++p)
{
int bufferIndex = (k + window) * windowCount + p + window;
int pixelIndex = (j + k) * width + (i + p);
buffer[bufferIndex] = input[pixelIndex];
}
}
Array.Sort(buffer, 0, windowCount * windowCount);
output[j * width + i] = buffer[(windowCount * windowCount) / 2];
}
}
}
```
This approach has no inherent parallelism, yet each loop iteration is independent. We will focus on the core part of the calculation and leave border management outside the scope of this lab.
The file 01-naive-csharp.cs (click the link to open the source file in another tab for editing) contains a program that already works. It loads the noisy input image, processes it, and saves the result in an output directory.
Use the below code cell to execute the program and display the output.
End of explanation
import platform
if platform.system() == "Windows" : # create directory on Windows
!mkdir output-02-parfor
if platform.system() == "Linux" : # create directory on Linux
!mkdir -p ./output-02-parfor
!hybridizer-cuda ./02-parallel-for/01-parfor-csharp.cs graybitmap.cs -o ./02-parallel-for/01-parfor-csharp.exe -run
# convert bmp to png to have interactive display
from PIL import Image
img = Image.open('./output-02-parfor/denoised.bmp')
img.save('./output-02-parfor/denoised.png', 'png')
from IPython.display import Image
Image(filename="./output-02-parfor/denoised.png", width=384, height=384)
Explanation: Distributing on CPU Threads
To accelerate calculations on the CPU, we can distribute the work across threads. For this, use the Parallel.For construct on the outer loop over lines, so that lines are processed in parallel. Note that each thread may require a separate buffer.
Modify 01-parfor-csharp.cs to make use of CPU parallelism.
Should you need, have a look at the solution.
End of explanation
import platform
if platform.system() == "Windows" : # create directory on Windows
!mkdir output-03-naive-gpu
if platform.system() == "Linux" : # create directory on Linux
!mkdir -p ./output-03-naive-gpu
!hybridizer-cuda ./03-naive-gpu/01-naive-gpu.cs graybitmap.cs -intrinsics bitonicsort.cuh=./ -o ./03-naive-gpu/01-naive-gpu.exe -run
# convert bmp to png to have interactive display
from PIL import Image
img = Image.open('./output-03-naive-gpu/denoised.bmp')
img.save('./output-03-naive-gpu/denoised.png', 'png')
from IPython.display import Image
Image(filename="./output-03-naive-gpu/denoised.png", width=384, height=384)
Explanation: Running on the GPU
In order to run the filter on the GPU, three operations are needed:
- Mark the method with an EntryPoint attribute to indicate it should run on GPU
- Launch the kernel using the HybRunner, generated with static method HybRunner.Cuda()
- Replace Array.Sort, which is not a builtin mapped to existing code; the sort has to be changed (this is out of the scope of this lab and has already been done in the input file)
Creating an instance of HybRunner and wrapping an object is done as follows:
csharp
HybRunner runner = HybRunner.Cuda();
dynamic wrapper = runner.Wrap(new Program());
Note that the result of the Wrap method is a dynamic type generated on the fly by the runner. It exposes the same methods as the wrapped type, with the same signature. Hence, launching the kernel is simply done by calling the method using the wrapper instance instead of the base instance (or no instance for static methods).
We will start from the Parallel.For version of the code. This expression of parallelism is interpreted by Hybridizer and transformed into a grid stride loop on threads and blocks. Hence adding the EntryPoint attribute should suffice.
Modify 01-naive-gpu.cs to make sure the method runs on GPU.
Should you need, refer to the solution.
End of explanation
import platform
if platform.system() == "Windows" : # create directory on Windows
!mkdir output-04-stack-gpu
if platform.system() == "Linux" : # create directory on Linux
!mkdir -p ./output-04-stack-gpu
!hybridizer-cuda ./04-stack-gpu/01-stack-gpu.cs graybitmap.cs -intrinsics bitonicsort.cuh=./ -o ./04-stack-gpu/01-stack-gpu.exe -run
# convert bmp to png to have interactive display
from PIL import Image
img = Image.open('./output-04-stack-gpu/denoised.bmp')
img.save('./output-04-stack-gpu/denoised.png', 'png')
from IPython.display import Image
Image(filename="./output-04-stack-gpu/denoised.png", width=384, height=384)
Explanation: Memory Allocation
The obtained performance appears to be very low. Let's investigate.
In the Parallel.For, an array is allocated for each line in the image. A malloc on the GPU, done for each thread, is very expensive (given the execution configuration - we will see that later on - there are thousands of such calls).
To reduce this, we don't allocate memory dynamically on the heap, but rather on the stack (local memory - or registers in the best case), which is allocated once at kernel startup. To this aim, we expose a class whose constructor is mapped to a C array declaration:
csharp
var buffer = new StackArray<byte>(size) ;
Will get translated into:
c++
unsigned char buffer[size] ;
For this declaration to be valid, size must be a compile-time constant. There are three ways of obtaining compile-time constants:
- Literal constants, e.g. buffer = new StackArray<byte>(42)
- Constants defined using the HybridConstant attribute on static data, or the IntrinsicConstant attribute on a property or method
- Class constants, assuming the compiler replaces them during MSIL generation
Here is an example
```csharp
class Filter
{
const int window = 5 ;
const int windowCount = 2 * window + 1 ;
[EntryPoint]
public void F()
{
var buffer = new StackArray<ushort>(windowCount * windowCount) ;
ushort[] contents = buffer.data ;
}
}
```
This limits the variety of filter sizes, as the window size needs to be provided at compile time. We will discuss this point later on.
Modify 01-stack-gpu.cs to allocate data on the stack instead of the heap.
Should you need, refer to the solution.
End of explanation
!hybridizer-cuda ./05-dice-gpu/01-query-config.cs -o ./05-dice-gpu/01-query-config.exe -run
Explanation: Feeding the Beast
A modern GPU is made of thousands of CUDA cores, on which operations need to be stacked to hide latency. In our image processing example, the work is distributed over lines, of which there are only a couple of thousand for the whole image. Each CUDA thread then processes a complete line, as illustrated in the following image:
<img title="SlicingWork" src="./images/work-stripes.png"/>
Using the CUDA API, we may query the number of multiprocessors and the number of CUDA cores. HybRunner has a default execution configuration that is suited to most use cases.
Run the following code to query information on the GPU and execution configuration - see also cudaGetDeviceProperties.
End of explanation
import platform
if platform.system() == "Windows" : # create directory on Windows
!mkdir output-05-dice-gpu
if platform.system() == "Linux" : # create directory on Linux
!mkdir -p ./output-05-dice-gpu
!hybridizer-cuda ./05-dice-gpu/02-dice-gpu.cs graybitmap.cs -intrinsics bitonicsort.cuh=./ -o ./05-dice-gpu/02-dice-gpu.exe -run
# convert bmp to png to have interactive display
from PIL import Image
img = Image.open('./output-05-dice-gpu/denoised.bmp')
img.save('./output-05-dice-gpu/denoised.png', 'png')
from IPython.display import Image
Image(filename="./output-05-dice-gpu/denoised.png", width=384, height=384)
Explanation: Distributing 1960 lines by blocks of 128 threads results in 16 busy blocks, which uses a fraction of most GPUs and does not hide latency.
More parallelism can be extracted by dicing the image into small squares instead of stripes, as illustrated here:
<img title="SlicingWork" src="./images/work-dice.png"/>
The amount of work can then be distributed over up to 4 million entries, which is well above the number of working units of the GPU.
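The back-of-the-envelope numbers look like this (a sketch that assumes a 1960x1960 image, which is not stated explicitly in the lab):
```python
import math

height = width = 1960            # assumed square image matching the 1960 lines above
threads_per_block = 128
blocks_as_stripes = math.ceil(height / threads_per_block)  # 16 blocks, one thread per line
work_items_as_dice = height * width                        # ~3.8 million independent pixels
print(blocks_as_stripes, work_items_as_dice)
```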
In order to enable this with little effort, Parallel2D class exposes a static method For, very similar to System.Threading.Parallel.For that runs an action over a 2D domain:
csharp
[EntryPoint]
public static void Parallel2DStack(...)
{
Parallel2D.For(fromI,toI, fromJ,toJ, (i,j) =>
{
... // action to be executed for (i,j) domain
});
}
Effectively dicing the processing, the execution configuration needs to be modified with SetDistrib. Both X and Y dimensions are used.
```csharp
dim3 grid = new dim3(<nb blocks X>, <nb blocks Y>, <nb blocks Z - ignored>) ;
dim3 block = new dim3(<nb threads X>, <nb threads Y>, <nb threads Z>) ;
wrapper.SetDistrib (grid,block) ;
```
Modify 02-dice-gpu.cs to use a Parallel2D pattern. You may want to try different values for dimension X and Y.
Should you need, refer to the solution.
End of explanation
!cd 05-dice-gpu/hybrid ; nvprof --profile-child-processes --metrics local_load_transactions,local_store_transactions,gld_transactions,gst_transactions,l2_read_transactions,l2_write_transactions ./run.sh
Explanation: Profiling application
Profiling the application may be done with nvprof. The following execution box provides the command line. We are querying the following metrics:
- local_{load,store}_transactions : to get the memory transactions on local memory
- {gld,gst}_transactions : to get the memory transactions to global memory (cache misses)
- l2_{read,write}_transactions : to get the memory transactions on the L2 cache
End of explanation
import platform
if platform.system() == "Windows" : # create directory on Windows
!mkdir output-06-regsort-gpu
if platform.system() == "Linux" : # create directory on Linux
!mkdir -p ./output-06-regsort-gpu
!hybridizer-cuda ./06-regsort-gpu/01-regsort-gpu.cs graybitmap.cs -o ./06-regsort-gpu/01-regsort-gpu.exe -run
# convert bmp to png to have interactive display
from PIL import Image
img = Image.open('./output-06-regsort-gpu/denoised.bmp')
img.save('./output-06-regsort-gpu/denoised.png', 'png')
from IPython.display import Image
Image(filename="./output-06-regsort-gpu/denoised.png", width=384, height=384)
!cd 06-regsort-gpu/hybrid ; nvprof --profile-child-processes --metrics local_load_transactions,local_store_transactions,gld_transactions,gst_transactions,l2_read_transactions,l2_write_transactions ./run.sh
Explanation: Static Sort
We can see that the number of local memory transactions is large: the local stack buffer has not been placed in registers. Indeed, when sorting the buffer, the size of the buffer is not known statically. We may change this with an alternate sort implementation: a static sort.
We can use a static sort template class with an intrinsic type:
csharp
public class StaticSort
{
[IntrinsicFunction("::hybridizer::StaticSort<49>::sort<uint16_t>")]
public static void Sort(ushort[] data)
{
Array.Sort(data);
}
}
In the above code, when the Sort method is called while running C# on the .NET virtual machine, the Array.Sort method gets called. When Hybridizer processes this method, it replaces the call to StaticSort.Sort with a call to the intrinsic function with the same parameters, here: ::hybridizer::StaticSort<49>::sort<uint16_t>, which is a template method of a template type.
Modify 01-regsort-gpu.cs to make use of static sort instead of the bitonic sort.
Should you need, refer to the solution.
End of explanation
!cd 06-regsort-gpu/hybrid ; nvprof -s --profile-child-processes ./run.sh
Explanation: Local load and store transactions should have been reduced to zero.
The display of kernel time actually includes some memory transfer and synchronization. The real execution time is returned by the profiler when run in summary mode.
End of explanation
import platform
if platform.system() == "Windows" : # create directory on Windows
!mkdir output-07-cache-aware-gpu
if platform.system() == "Linux" : # create directory on Linux
!mkdir -p ./output-07-cache-aware-gpu
!hybridizer-cuda ./07-cache-aware-gpu/01-cache-aware-gpu.cs -intrinsics intrinsics.cuh=./ graybitmap.cs -o ./07-cache-aware-gpu/01-cache-aware-gpu.exe -run
# convert bmp to png to have interactive display
from PIL import Image
img = Image.open('./output-07-cache-aware-gpu/denoised.bmp')
img.save('./output-07-cache-aware-gpu/denoised.png', 'png')
from IPython.display import Image
Image(filename="./output-07-cache-aware-gpu/denoised.png", width=384, height=384)
!cd 07-cache-aware-gpu/hybrid ; nvprof --profile-child-processes --metrics local_load_transactions,local_store_transactions,gld_transactions,gst_transactions,l2_read_transactions,l2_write_transactions ./run.sh
Explanation: Cache Coherence
GPUs are equipped with L1 and L2 caches, which are used automatically. There are other cache types such as the texture cache or the constant cache. All of these help improve data locality.
In the context of this application, the traversal of the data can be predicted. Instead of dicing the image and having each thread load around fifty values (in our example), we can arrange the calculations to reuse previously loaded data and reduce the cache pressure from 49 loads down to 7.
For this, we take a bit more control on the parallel loop using explicit work distribution:
csharp
for (int blockJ = blockIdx.x * chunk; blockJ < height; blockJ += gridDim.x * chunk)
{
for (int blockI = threadIdx.x; blockI < width; blockI += blockDim.x)
{
<... process the shaft ...>
}
}
The blockJ index distributes column parts across blocks, and blockI distributes columns across threads.
Each thread is in charge of a column part of the image, iterating on the processing line with 7 loads into our register table at each iteration.
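The traversal can be sketched in Python (illustration only; the lab code itself is C#):
```python
import numpy as np

def column_median(image, i, window=3):
    # One "thread" walks down column i. In the optimized C# kernel the 7x7 working set
    # stays in registers and only the newly entered row (7 values) is loaded per step;
    # this sketch simply recomputes the window to show the traversal order.
    height = image.shape[0]
    out = np.zeros(height)
    for j in range(window, height - window):
        out[j] = np.median(image[j - window:j + window + 1, i - window:i + window + 1])
    return out
```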
Finish the implementation of 01-cache-aware-gpu.cs. intrinsics.cuh defines an intrinsic type for the rolling buffer.
Should you need, refer to the solution.
End of explanation
!cd 07-cache-aware-gpu/hybrid ; nvprof -s --profile-child-processes ./run.sh
Explanation: The number of gld_transactions and l2_read_transactions should be significantly reduced.
End of explanation
!cd 07-cache-aware-gpu/hybrid ; nvprof --profile-child-processes --metrics ldst_fu_utilization,single_precision_fu_utilization ./run.sh
Explanation: It is also possible to query the utilization of the different pipelines. In this last version, we should see high utilization on single_precision_fu_utilization and low utilization for the other units. We are compute bound!
End of explanation |
8,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inference
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: Whenever people compare Bayesian inference with conventional approaches, one of the questions that comes up most often is something like, "What about p-values?"
And one of the most common examples is the comparison of two groups to see if there is a difference in their means.
In classical statistical inference, the usual tool for this scenario is a Student's t-test, and the result is a p-value.
This process is an example of null hypothesis significance testing.
A Bayesian alternative is to compute the posterior distribution of the difference between the groups.
Then we can use that distribution to answer whatever questions we are interested in, including the most likely size of the difference, a credible interval that's likely to contain the true difference, the probability of superiority, or the probability that the difference exceeds some threshold.
To demonstrate this process, I'll solve a problem borrowed from a statistical textbook
Step2: I'll use Pandas to load the data into a DataFrame.
Step3: The Treatment column indicates whether each student was in the treated or control group.
The Response is their score on the test.
I'll use groupby to separate the data for the Treated and Control groups
Step4: Here are CDFs of the scores for the two groups and summary statistics.
Step6: There is overlap between the distributions, but it looks like the scores are higher in the treated group.
The distribution of scores is not exactly normal for either group, but it is close enough that the normal model is a reasonable choice.
So I'll assume that in the entire population of students (not just the ones in the experiment), the distribution of scores is well modeled by a normal distribution with unknown mean and standard deviation.
I'll use mu and sigma to denote these unknown parameters,
and we'll do a Bayesian update to estimate what they are.
Estimating Parameters
As always, we need a prior distribution for the parameters.
Since there are two parameters, it will be a joint distribution.
I'll construct it by choosing marginal distributions for each parameter and computing their outer product.
As a simple starting place, I'll assume that the prior distributions for mu and sigma are uniform.
The following function makes a Pmf object that represents a uniform distribution.
Step7: make_uniform takes as parameters
An array of quantities, qs, and
A string, name, which is assigned to the index so it appears when we display the Pmf.
Here's the prior distribution for mu
Step8: I chose the lower and upper bounds by trial and error.
I'll explain how when we look at the posterior distribution.
Here's the prior distribution for sigma
Step9: Now we can use make_joint to make the joint prior distribution.
Step10: And we'll start by working with the data from the control group.
Step11: In the next section we'll compute the likelihood of this data for each pair of parameters in the prior distribution.
Likelihood
We would like to know the probability of each score in the dataset for each hypothetical pair of values, mu and sigma.
I'll do that by making a 3-dimensional grid with values of mu on the first axis, values of sigma on the second axis, and the scores from the dataset on the third axis.
Step12: Now we can use norm.pdf to compute the probability density of each score for each hypothetical pair of parameters.
Step13: The result is a 3-D array. To compute likelihoods, I'll multiply these densities along axis=2, which is the axis of the data
Step14: The result is a 2-D array that contains the likelihood of the entire dataset for each hypothetical pair of parameters.
We can use this array to update the prior, like this
Step16: The result is a DataFrame that represents the joint posterior distribution.
The following function encapsulates these steps.
Step17: Here are the updates for the control and treatment groups
Step18: And here's what they look like
Step19: Along the $x$-axis, it looks like the mean score for the treated group is higher.
Along the $y$-axis, it looks like the standard deviation for the treated group is lower.
If we think the treatment causes these differences, the data suggest that the treatment increases the mean of the scores and decreases their spread.
We can see these differences more clearly by looking at the marginal distributions for mu and sigma.
Posterior Marginal Distributions
I'll use marginal, which we saw in <<_MarginalDistributions>>, to extract the posterior marginal distributions for the population means.
Step20: Here's what they look like
Step21: In both cases the posterior probabilities at the ends of the range are near zero, which means that the bounds we chose for the prior distribution are wide enough.
Comparing the marginal distributions for the two groups, it looks like the population mean in the treated group is higher.
We can use prob_gt to compute the probability of superiority
Step22: There is a 98% chance that the mean in the treated group is higher.
Distribution of Differences
To quantify the magnitude of the difference between groups, we can use sub_dist to compute the distribution of the difference.
Step23: There are two things to be careful about when you use methods like sub_dist.
The first is that the result usually contains more elements than the original Pmf.
In this example, the original distributions have the same quantities, so the size increase is moderate.
Step24: In the worst case, the size of the result can be the product of the sizes of the originals.
The other thing to be careful about is plotting the Pmf.
In this example, if we plot the distribution of differences, the result is pretty noisy.
Step25: There are two ways to work around that limitation. One is to plot the CDF, which smooths out the noise
Step27: The other option is to use kernel density estimation (KDE) to make a smooth approximation of the PDF on an equally-spaced grid, which is what this function does
Step28: kde_from_pmf takes as parameters a Pmf and the number of places to evaluate the KDE.
It uses gaussian_kde, which we saw in <<_KernelDensityEstimation>>, passing the probabilities from the Pmf as weights.
This makes the estimated densities higher where the probabilities in the Pmf are higher.
Here's what the kernel density estimate looks like for the Pmf of differences between the groups.
Step29: The mean of this distribution is almost 10 points on a test where the mean is around 45, so the effect of the treatment seems to be substantial.
Step30: We can use credible_interval to compute a 90% credible interval.
Step31: Based on this interval, we are pretty sure the treatment improves test scores by 2 to 17 points.
Using Summary Statistics
In this example the dataset is not very big, so it doesn't take too long to compute the probability of every score under every hypothesis.
But the result is a 3-D array; for larger datasets, it might be too big to compute practically.
Also, with larger datasets the likelihoods get very small, sometimes so small that we can't compute them with floating-point arithmetic.
That's because we are computing the probability of a particular dataset; the number of possible datasets is astronomically big, so the probability of any of them is very small.
An alternative is to compute a summary of the dataset and compute the likelihood of the summary.
For example, if we compute the mean and standard deviation of the data, we can compute the likelihood of those summary statistics under each hypothesis.
As an example, suppose we know that the actual mean of the population, $\mu$, is 42 and the actual standard deviation, $\sigma$, is 17.
Step32: Now suppose we draw a sample from this distribution with sample size n=20, and compute the mean of the sample, which I'll call m, and the standard deviation of the sample, which I'll call s.
And suppose it turns out that
Step33: The summary statistics, m and s, are not too far from the parameters $\mu$ and $\sigma$, so it seems like they are not too unlikely.
To compute their likelihood, we will take advantage of three results from mathematical statistics
Step34: This is the "sampling distribution of the mean".
We can use it to compute the likelihood of the observed value of m, which is 41.
Step35: Now let's compute the likelihood of the observed value of s, which is 18.
First, we compute the transformed value t
Step36: Then we create a chi2 object to represent the distribution of t
Step37: Now we can compute the likelihood of t
Step38: Finally, because m and s are independent, their joint likelihood is the product of their likelihoods
Step39: Now we can compute the likelihood of the data for any values of $\mu$ and $\sigma$, which we'll use in the next section to do the update.
Checking
Step40: Update with Summary Statistics
Now we're ready to do an update.
I'll compute summary statistics for the two groups.
Step41: The result is a dictionary that maps from group name to a tuple that contains the sample size, n, the sample mean, m, and the sample standard deviation s, for each group.
I'll demonstrate the update with the summary statistics from the control group.
Step42: I'll make a mesh with hypothetical values of mu on the x axis and values of sigma on the y axis.
Step43: Now we can compute the likelihood of seeing the sample mean, m, for each pair of parameters.
Step44: And we can compute the likelihood of the sample standard deviation, s, for each pair of parameters.
Step45: Finally, we can do the update with both likelihoods
Step47: To compute the posterior distribution for the treatment group, I'll put the previous steps in a function
Step48: Here's the update for the treatment group
Step49: And here are the results.
Step50: Visually, these posterior joint distributions are similar to the ones we computed using the entire dataset, not just the summary statistics.
But they are not exactly the same, as we can see by comparing the marginal distributions.
Comparing Marginals
Again, let's extract the marginal posterior distributions.
Step51: And compare them to results we got using the entire dataset (the dashed lines).
Step52: The posterior distributions based on summary statistics are similar to the posteriors we computed using the entire dataset, but in both cases they are shorter and a little wider.
That's because the update with summary statistics is based on the implicit assumption that the distribution of the data is normal.
But it's not; as a result, when we replace the dataset with the summary statistics, we lose some information about the true distribution of the data.
With less information, we are less certain about the parameters.
Proof By Simulation
The update with summary statistics is based on theoretical distributions, and it seems to work, but I think it is useful to test theories like this, for a few reasons
Step53: I'll create a norm object to represent this distribution.
Step54: norm provides rvs, which generates random values from the distribution.
We can use it to simulate 1000 samples, each with sample size n=20.
Step55: The result is an array with 1000 rows, each containing a sample or 20 simulated test scores.
If we compute the mean of each row, the result is an array that contains 1000 sample means; that is, each value is the mean of a sample with n=20.
Step57: Now, let's compare the distribution of these means to dist_m.
I'll use pmf_from_dist to make a discrete approximation of dist_m
Step58: pmf_from_dist takes an object representing a continuous distribution, evaluates its probability density function at equally space points between low and high, and returns a normalized Pmf that approximates the distribution.
I'll use it to evaluate dist_m over a range of six standard deviations.
Step59: Now let's compare this theoretical distribution to the means of the samples.
I'll use kde_from_sample to estimate their distribution and evaluate it in the same locations as pmf_m.
Step60: The following figure shows the two distributions.
Step61: The theoretical distribution and the distribution of sample means are in accord.
Checking Standard Deviation
Let's also check that the standard deviations follow the distribution we expect.
First I'll compute the standard deviation for each of the 1000 samples.
Step62: Now we'll compute the transformed values, $t = n s^2 / \sigma^2$.
Step63: We expect the transformed values to follow a chi-square distribution with parameter $n-1$.
SciPy provides chi2, which we can use to represent this distribution.
Step64: We can use pmf_from_dist again to make a discrete approximation.
Step65: And we'll use kde_from_sample to estimate the distribution of the sample standard deviations.
Step66: Now we can compare the theoretical distribution to the distribution of the standard deviations.
Step67: The distribution of transformed standard deviations agrees with the theoretical distribution.
Finally, to confirm that the sample means and standard deviations are independent, I'll compute their coefficient of correlation
Step68: Their correlation is near zero, which is consistent with their being independent.
So the simulations confirm the theoretical results we used to do the update with summary statistics.
We can also use kdeplot from Seaborn to see what their joint distribution looks like.
Step69: It looks like the axes of the ellipses are aligned with the axes, which indicates that the variables are independent.
Summary
In this chapter we used a joint distribution to represent prior probabilities for the parameters of a normal distribution, mu and sigma.
And we updated that distribution two ways
Step71: Exercise
Step72: Here's how we can use it to sample pairs from the posterior distributions for the two groups.
Step74: The result is an array of tuples, where each tuple contains a possible pair of values for $\mu$ and $\sigma$.
Now you can loop through the samples, compute the Cohen effect size for each, and estimate the distribution of effect sizes.
Step75: Exercise
Step77: Exercise | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Inference
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/drp_scores.csv')
Explanation: Whenever people compare Bayesian inference with conventional approaches, one of the questions that comes up most often is something like, "What about p-values?"
And one of the most common examples is the comparison of two groups to see if there is a difference in their means.
In classical statistical inference, the usual tool for this scenario is a Student's t-test, and the result is a p-value.
This process is an example of null hypothesis significance testing.
A Bayesian alternative is to compute the posterior distribution of the difference between the groups.
Then we can use that distribution to answer whatever questions we are interested in, including the most likely size of the difference, a credible interval that's likely to contain the true difference, the probability of superiority, or the probability that the difference exceeds some threshold.
To demonstrate this process, I'll solve a problem borrowed from a statistical textbook: evaluating the effect of an educational "treatment" compared to a control.
Improving Reading Ability
We'll use data from a Ph.D. dissertation in educational psychology written in 1987, which was used as an example in a statistics textbook from 1989 and published on DASL, a web page that collects data stories.
Here's the description from DASL:
An educator conducted an experiment to test whether new directed reading activities in the classroom will help elementary school pupils improve some aspects of their reading ability. She arranged for a third grade class of 21 students to follow these activities for an 8-week period. A control classroom of 23 third graders followed the same curriculum without the activities. At the end of the 8 weeks, all students took a Degree of Reading Power (DRP) test, which measures the aspects of reading ability that the treatment is designed to improve.
The dataset is available here.
The following cell downloads the data.
End of explanation
import pandas as pd
df = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\t')
df.head(3)
Explanation: I'll use Pandas to load the data into a DataFrame.
End of explanation
grouped = df.groupby('Treatment')
responses = {}
for name, group in grouped:
responses[name] = group['Response']
Explanation: The Treatment column indicates whether each student was in the treated or control group.
The Response is their score on the test.
I'll use groupby to separate the data for the Treated and Control groups:
End of explanation
from empiricaldist import Cdf
from utils import decorate
for name, response in responses.items():
cdf = Cdf.from_seq(response)
cdf.plot(label=name)
decorate(xlabel='Score',
ylabel='CDF',
title='Distributions of test scores')
Explanation: Here are CDFs of the scores for the two groups and summary statistics.
End of explanation
from empiricaldist import Pmf
def make_uniform(qs, name=None, **options):
"""Make a Pmf that represents a uniform distribution."""
pmf = Pmf(1.0, qs, **options)
pmf.normalize()
if name:
pmf.index.name = name
return pmf
Explanation: There is overlap between the distributions, but it looks like the scores are higher in the treated group.
The distribution of scores is not exactly normal for either group, but it is close enough that the normal model is a reasonable choice.
So I'll assume that in the entire population of students (not just the ones in the experiment), the distribution of scores is well modeled by a normal distribution with unknown mean and standard deviation.
I'll use mu and sigma to denote these unknown parameters,
and we'll do a Bayesian update to estimate what they are.
Estimating Parameters
As always, we need a prior distribution for the parameters.
Since there are two parameters, it will be a joint distribution.
I'll construct it by choosing marginal distributions for each parameter and computing their outer product.
As a simple starting place, I'll assume that the prior distributions for mu and sigma are uniform.
The following function makes a Pmf object that represents a uniform distribution.
End of explanation
import numpy as np
qs = np.linspace(20, 80, num=101)
prior_mu = make_uniform(qs, name='mean')
Explanation: make_uniform takes as parameters
An array of quantities, qs, and
A string, name, which is assigned to the index so it appears when we display the Pmf.
Here's the prior distribution for mu:
End of explanation
qs = np.linspace(5, 30, num=101)
prior_sigma = make_uniform(qs, name='std')
Explanation: I chose the lower and upper bounds by trial and error.
I'll explain how when we look at the posterior distribution.
Here's the prior distribution for sigma:
End of explanation
from utils import make_joint
prior = make_joint(prior_mu, prior_sigma)
Explanation: Now we can use make_joint to make the joint prior distribution.
End of explanation
data = responses['Control']
data.shape
Explanation: And we'll start by working with the data from the control group.
End of explanation
mu_mesh, sigma_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
mu_mesh.shape
Explanation: In the next section we'll compute the likelihood of this data for each pair of parameters in the prior distribution.
Likelihood
We would like to know the probability of each score in the dataset for each hypothetical pair of values, mu and sigma.
I'll do that by making a 3-dimensional grid with values of mu on the first axis, values of sigma on the second axis, and the scores from the dataset on the third axis.
End of explanation
from scipy.stats import norm
densities = norm(mu_mesh, sigma_mesh).pdf(data_mesh)
densities.shape
Explanation: Now we can use norm.pdf to compute the probability density of each score for each hypothetical pair of parameters.
End of explanation
likelihood = densities.prod(axis=2)
likelihood.shape
Explanation: The result is a 3-D array. To compute likelihoods, I'll multiply these densities along axis=2, which is the axis of the data:
End of explanation
from utils import normalize
posterior = prior * likelihood
normalize(posterior)
posterior.shape
Explanation: The result is a 2-D array that contains the likelihood of the entire dataset for each hypothetical pair of parameters.
We can use this array to update the prior, like this:
End of explanation
def update_norm(prior, data):
"""Update the prior based on data."""
mu_mesh, sigma_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = norm(mu_mesh, sigma_mesh).pdf(data_mesh)
likelihood = densities.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
Explanation: The result is a DataFrame that represents the joint posterior distribution.
The following function encapsulates these steps.
End of explanation
data = responses['Control']
posterior_control = update_norm(prior, data)
data = responses['Treated']
posterior_treated = update_norm(prior, data)
Explanation: Here are the updates for the control and treatment groups:
End of explanation
import matplotlib.pyplot as plt
from utils import plot_contour
plot_contour(posterior_control, cmap='Blues')
plt.text(49.5, 18, 'Control', color='C0')
cs = plot_contour(posterior_treated, cmap='Oranges')
plt.text(57, 12, 'Treated', color='C1')
decorate(xlabel='Mean (mu)',
ylabel='Standard deviation (sigma)',
title='Joint posterior distributions of mu and sigma')
Explanation: And here's what they look like:
End of explanation
from utils import marginal
pmf_mean_control = marginal(posterior_control, 0)
pmf_mean_treated = marginal(posterior_treated, 0)
Explanation: Along the $x$-axis, it looks like the mean score for the treated group is higher.
Along the $y$-axis, it looks like the standard deviation for the treated group is lower.
If we think the treatment causes these differences, the data suggest that the treatment increases the mean of the scores and decreases their spread.
We can see these differences more clearly by looking at the marginal distributions for mu and sigma.
Posterior Marginal Distributions
I'll use marginal, which we saw in <<_MarginalDistributions>>, to extract the posterior marginal distributions for the population means.
End of explanation
pmf_mean_control.plot(label='Control')
pmf_mean_treated.plot(label='Treated')
decorate(xlabel='Population mean (mu)',
ylabel='PDF',
title='Posterior distributions of mu')
Explanation: Here's what they look like:
End of explanation
Pmf.prob_gt(pmf_mean_treated, pmf_mean_control)
Explanation: In both cases the posterior probabilities at the ends of the range are near zero, which means that the bounds we chose for the prior distribution are wide enough.
Comparing the marginal distributions for the two groups, it looks like the population mean in the treated group is higher.
We can use prob_gt to compute the probability of superiority:
End of explanation
pmf_diff = Pmf.sub_dist(pmf_mean_treated, pmf_mean_control)
Explanation: There is a 98% chance that the mean in the treated group is higher.
Distribution of Differences
To quantify the magnitude of the difference between groups, we can use sub_dist to compute the distribution of the difference.
End of explanation
len(pmf_mean_treated), len(pmf_mean_control), len(pmf_diff)
Explanation: There are two things to be careful about when you use methods like sub_dist.
The first is that the result usually contains more elements than the original Pmf.
In this example, the original distributions have the same quantities, so the size increase is moderate.
End of explanation
pmf_diff.plot()
decorate(xlabel='Difference in population means',
ylabel='PDF',
title='Posterior distribution of difference in mu')
Explanation: In the worst case, the size of the result can be the product of the sizes of the originals.
The other thing to be careful about is plotting the Pmf.
In this example, if we plot the distribution of differences, the result is pretty noisy.
End of explanation
cdf_diff = pmf_diff.make_cdf()
cdf_diff.plot()
decorate(xlabel='Difference in population means',
ylabel='CDF',
title='Posterior distribution of difference in mu')
Explanation: There are two ways to work around that limitation. One is to plot the CDF, which smooths out the noise:
End of explanation
from scipy.stats import gaussian_kde
def kde_from_pmf(pmf, n=101):
"""Make a kernel density estimate for a PMF."""
kde = gaussian_kde(pmf.qs, weights=pmf.ps)
qs = np.linspace(pmf.qs.min(), pmf.qs.max(), n)
ps = kde.evaluate(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
Explanation: The other option is to use kernel density estimation (KDE) to make a smooth approximation of the PDF on an equally-spaced grid, which is what this function does:
End of explanation
kde_diff = kde_from_pmf(pmf_diff)
kde_diff.plot()
decorate(xlabel='Difference in means',
ylabel='PDF',
title='Posterior distribution of difference in mu')
Explanation: kde_from_pmf takes as parameters a Pmf and the number of places to evaluate the KDE.
It uses gaussian_kde, which we saw in <<_KernelDensityEstimation>>, passing the probabilities from the Pmf as weights.
This makes the estimated densities higher where the probabilities in the Pmf are higher.
Here's what the kernel density estimate looks like for the Pmf of differences between the groups.
End of explanation
pmf_diff.mean()
Explanation: The mean of this distribution is almost 10 points on a test where the mean is around 45, so the effect of the treatment seems to be substantial.
End of explanation
pmf_diff.credible_interval(0.9)
Explanation: We can use credible_interval to compute a 90% credible interval.
End of explanation
mu = 42
sigma = 17
Explanation: Based on this interval, we are pretty sure the treatment improves test scores by 2 to 17 points.
Using Summary Statistics
In this example the dataset is not very big, so it doesn't take too long to compute the probability of every score under every hypothesis.
But the result is a 3-D array; for larger datasets, it might be too big to compute practically.
Also, with larger datasets the likelihoods get very small, sometimes so small that we can't compute them with floating-point arithmetic.
That's because we are computing the probability of a particular dataset; the number of possible datasets is astronomically big, so the probability of any of them is very small.
An alternative is to compute a summary of the dataset and compute the likelihood of the summary.
For example, if we compute the mean and standard deviation of the data, we can compute the likelihood of those summary statistics under each hypothesis.
As an example, suppose we know that the actual mean of the population, $\mu$, is 42 and the actual standard deviation, $\sigma$, is 17.
End of explanation
n = 20
m = 41
s = 18
Explanation: Now suppose we draw a sample from this distribution with sample size n=20, and compute the mean of the sample, which I'll call m, and the standard deviation of the sample, which I'll call s.
And suppose it turns out that:
End of explanation
dist_m = norm(mu, sigma/np.sqrt(n))
Explanation: The summary statistics, m and s, are not too far from the parameters $\mu$ and $\sigma$, so it seems like they are not too unlikely.
To compute their likelihood, we will take advantage of three results from mathematical statistics:
Given $\mu$ and $\sigma$, the distribution of m is normal with parameters $\mu$ and $\sigma/\sqrt{n}$;
The distribution of $s$ is more complicated, but if we compute the transform $t = n s^2 / \sigma^2$, the distribution of $t$ is chi-squared with parameter $n-1$; and
According to Basu's theorem, m and s are independent.
So let's compute the likelihood of m and s given $\mu$ and $\sigma$.
First I'll create a norm object that represents the distribution of m.
End of explanation
like1 = dist_m.pdf(m)
like1
Explanation: This is the "sampling distribution of the mean".
We can use it to compute the likelihood of the observed value of m, which is 41.
End of explanation
t = n * s**2 / sigma**2
t
Explanation: Now let's compute the likelihood of the observed value of s, which is 18.
First, we compute the transformed value t:
End of explanation
from scipy.stats import chi2
dist_s = chi2(n-1)
Explanation: Then we create a chi2 object to represent the distribution of t:
End of explanation
like2 = dist_s.pdf(t)
like2
Explanation: Now we can compute the likelihood of t:
End of explanation
like = like1 * like2
like
Explanation: Finally, because m and s are independent, their joint likelihood is the product of their likelihoods:
End of explanation
samples = norm(mu, sigma).rvs((100000, n))
samples.shape
sample_s = samples.std(axis=1)
sample_s.shape
sample_t = n * sample_s**2 / sigma**2
from empiricaldist import Cdf
xs = np.linspace(1, 60, 101)
ys = dist_s.cdf(xs)
plt.plot(xs, ys, lw=1)
Cdf.from_seq(sample_t).plot(lw=1)
Explanation: Now we can compute the likelihood of the data for any values of $\mu$ and $\sigma$, which we'll use in the next section to do the update.
Checking
End of explanation
summary = {}
for name, response in responses.items():
summary[name] = len(response), response.mean(), response.std()
summary
Explanation: Update with Summary Statistics
Now we're ready to do an update.
I'll compute summary statistics for the two groups.
End of explanation
n, m, s = summary['Control']
Explanation: The result is a dictionary that maps from group name to a tuple that contains the sample size, n, the sample mean, m, and the sample standard deviation s, for each group.
I'll demonstrate the update with the summary statistics from the control group.
End of explanation
mus, sigmas = np.meshgrid(prior.columns, prior.index)
mus.shape
Explanation: I'll make a mesh with hypothetical values of mu on the x axis and values of sigma on the y axis.
End of explanation
like1 = norm(mus, sigmas/np.sqrt(n)).pdf(m)
like1.shape
Explanation: Now we can compute the likelihood of seeing the sample mean, m, for each pair of parameters.
End of explanation
ts = n * s**2 / sigmas**2
like2 = chi2(n-1).pdf(ts)
like2.shape
Explanation: And we can compute the likelihood of the sample standard deviation, s, for each pair of parameters.
End of explanation
posterior_control2 = prior * like1 * like2
normalize(posterior_control2)
Explanation: Finally, we can do the update with both likelihoods:
End of explanation
def update_norm_summary(prior, data):
    """Update a normal distribution using summary statistics."""
n, m, s = data
mu_mesh, sigma_mesh = np.meshgrid(prior.columns, prior.index)
like1 = norm(mu_mesh, sigma_mesh/np.sqrt(n)).pdf(m)
like2 = chi2(n-1).pdf(n * s**2 / sigma_mesh**2)
posterior = prior * like1 * like2
normalize(posterior)
return posterior
Explanation: To compute the posterior distribution for the treatment group, I'll put the previous steps in a function:
End of explanation
data = summary['Treated']
posterior_treated2 = update_norm_summary(prior, data)
Explanation: Here's the update for the treatment group:
End of explanation
plot_contour(posterior_control2, cmap='Blues')
plt.text(49.5, 18, 'Control', color='C0')
cs = plot_contour(posterior_treated2, cmap='Oranges')
plt.text(57, 12, 'Treated', color='C1')
decorate(xlabel='Mean (mu)',
ylabel='Standard deviation (sigma)',
title='Joint posterior distributions of mu and sigma')
Explanation: And here are the results.
End of explanation
from utils import marginal
pmf_mean_control2 = marginal(posterior_control2, 0)
pmf_mean_treated2 = marginal(posterior_treated2, 0)
Explanation: Visually, these posterior joint distributions are similar to the ones we computed using the entire dataset, not just the summary statistics.
But they are not exactly the same, as we can see by comparing the marginal distributions.
Comparing Marginals
Again, let's extract the marginal posterior distributions.
End of explanation
pmf_mean_control.plot(color='C5', linestyle='dashed')
pmf_mean_control2.plot(label='Control')
pmf_mean_treated.plot(color='C5', linestyle='dashed')
pmf_mean_treated2.plot(label='Treated')
decorate(xlabel='Population mean',
ylabel='PDF',
title='Posterior distributions of mu')
Explanation: And compare them to results we got using the entire dataset (the dashed lines).
End of explanation
mu = 42
sigma = 17
Explanation: The posterior distributions based on summary statistics are similar to the posteriors we computed using the entire dataset, but in both cases they are shorter and a little wider.
That's because the update with summary statistics is based on the implicit assumption that the distribution of the data is normal.
But it's not; as a result, when we replace the dataset with the summary statistics, we lose some information about the true distribution of the data.
With less information, we are less certain about the parameters.
Proof By Simulation
The update with summary statistics is based on theoretical distributions, and it seems to work, but I think it is useful to test theories like this, for a few reasons:
It confirms that our understanding of the theory is correct,
It confirms that the conditions where we apply the theory are conditions where the theory holds,
It confirms that the implementation details are correct. For many distributions, there is more than one way to specify the parameters. If you use the wrong specification, this kind of testing will help you catch the error.
In this section I'll use simulations to show that the distribution of the sample mean and standard deviation is as I claimed.
But if you want to take my word for it, you can skip this section and the next.
Let's suppose that we know the actual mean and standard deviation of the population:
End of explanation
dist = norm(mu, sigma)
Explanation: I'll create a norm object to represent this distribution.
End of explanation
n = 20
samples = dist.rvs((1000, n))
samples.shape
Explanation: norm provides rvs, which generates random values from the distribution.
We can use it to simulate 1000 samples, each with sample size n=20.
End of explanation
sample_means = samples.mean(axis=1)
sample_means.shape
Explanation: The result is an array with 1000 rows, each containing a sample of 20 simulated test scores.
If we compute the mean of each row, the result is an array that contains 1000 sample means; that is, each value is the mean of a sample with n=20.
End of explanation
def pmf_from_dist(dist, low, high):
    """Make a discrete approximation of a continuous distribution.
    dist: SciPy dist object
    low: low end of range
    high: high end of range
    returns: normalized Pmf
    """
qs = np.linspace(low, high, 101)
ps = dist.pdf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
Explanation: Now, let's compare the distribution of these means to dist_m.
I'll use pmf_from_dist to make a discrete approximation of dist_m:
End of explanation
low = dist_m.mean() - dist_m.std() * 3
high = dist_m.mean() + dist_m.std() * 3
pmf_m = pmf_from_dist(dist_m, low, high)
Explanation: pmf_from_dist takes an object representing a continuous distribution, evaluates its probability density function at equally spaced points between low and high, and returns a normalized Pmf that approximates the distribution.
I'll use it to evaluate dist_m over a range of six standard deviations.
End of explanation
from utils import kde_from_sample
qs = pmf_m.qs
pmf_sample_means = kde_from_sample(sample_means, qs)
Explanation: Now let's compare this theoretical distribution to the means of the samples.
I'll use kde_from_sample to estimate their distribution and evaluate it in the same locations as pmf_m.
End of explanation
pmf_m.plot(label='Theoretical distribution',
style=':', color='C5')
pmf_sample_means.plot(label='KDE of sample means')
decorate(xlabel='Mean score',
ylabel='PDF',
title='Distribution of the mean')
Explanation: The following figure shows the two distributions.
End of explanation
sample_stds = samples.std(axis=1)
sample_stds.shape
Explanation: The theoretical distribution and the distribution of sample means are in accord.
Checking Standard Deviation
Let's also check that the standard deviations follow the distribution we expect.
First I'll compute the standard deviation for each of the 1000 samples.
End of explanation
transformed = n * sample_stds**2 / sigma**2
Explanation: Now we'll compute the transformed values, $t = n s^2 / \sigma^2$.
End of explanation
from scipy.stats import chi2
dist_s = chi2(n-1)
Explanation: We expect the transformed values to follow a chi-square distribution with parameter $n-1$.
SciPy provides chi2, which we can use to represent this distribution.
End of explanation
low = 0
high = dist_s.mean() + dist_s.std() * 4
pmf_s = pmf_from_dist(dist_s, low, high)
Explanation: We can use pmf_from_dist again to make a discrete approximation.
End of explanation
qs = pmf_s.qs
pmf_sample_stds = kde_from_sample(transformed, qs)
Explanation: And we'll use kde_from_sample to estimate the distribution of the sample standard deviations.
End of explanation
pmf_s.plot(label='Theoretical distribution',
style=':', color='C5')
pmf_sample_stds.plot(label='KDE of sample std',
color='C1')
decorate(xlabel='Standard deviation of scores',
ylabel='PDF',
title='Distribution of standard deviation')
Explanation: Now we can compare the theoretical distribution to the distribution of the standard deviations.
End of explanation
np.corrcoef(sample_means, sample_stds)[0][1]
Explanation: The distribution of transformed standard deviations agrees with the theoretical distribution.
Finally, to confirm that the sample means and standard deviations are independent, I'll compute their coefficient of correlation:
End of explanation
import seaborn as sns
sns.kdeplot(x=sample_means, y=sample_stds)
decorate(xlabel='Mean (mu)',
ylabel='Standard deviation (sigma)',
title='Joint distribution of mu and sigma')
Explanation: Their correlation is near zero, which is consistent with their being independent.
So the simulations confirm the theoretical results we used to do the update with summary statistics.
We can also use kdeplot from Seaborn to see what their joint distribution looks like.
End of explanation
# Solution
pmf_std_control = marginal(posterior_control, 1)
pmf_std_treated = marginal(posterior_treated, 1)
# Solution
pmf_std_control.plot(label='Control')
pmf_std_treated.plot(label='Treated')
decorate(xlabel='Population standard deviation',
ylabel='PDF',
title='Posterior distributions of sigma')
# Solution
Pmf.prob_gt(pmf_std_control, pmf_std_treated)
# Solution
pmf_diff2 = Pmf.sub_dist(pmf_std_control, pmf_std_treated)
# Solution
pmf_diff2.mean()
# Solution
pmf_diff2.credible_interval(0.9)
# Solution
kde_from_pmf(pmf_diff2).plot()
decorate(xlabel='Difference in population standard deviation',
ylabel='PDF',
title='Posterior distributions of difference in sigma')
Explanation: It looks like the axes of the ellipses are aligned with the coordinate axes of the figure, which indicates that the variables are independent.
Summary
In this chapter we used a joint distribution to represent prior probabilities for the parameters of a normal distribution, mu and sigma.
And we updated that distribution two ways: first using the entire dataset and the normal PDF; then using summary statistics, the normal PDF, and the chi-square PDF.
Using summary statistics is computationally more efficient, but it loses some information in the process.
Normal distributions appear in many domains, so the methods in this chapter are broadly applicable. The exercises at the end of the chapter will give you a chance to apply them.
Exercises
Exercise: Looking again at the posterior joint distribution of mu and sigma, it seems like the standard deviation of the treated group might be lower; if so, that would suggest that the treatment is more effective for students with lower scores.
But before we speculate too much, we should estimate the size of the difference and see whether it might actually be 0.
Extract the marginal posterior distributions of sigma for the two groups.
What is the probability that the standard deviation is higher in the control group?
Compute the distribution of the difference in sigma between the two groups. What is the mean of this difference? What is the 90% credible interval?
End of explanation
def sample_joint(joint, size):
    """Draw a sample from a joint distribution.
    joint: DataFrame representing a joint distribution
    size: sample size
    """
pmf = Pmf(joint.transpose().stack())
return pmf.choice(size)
Explanation: Exercise: An effect size is a statistic intended to quantify the magnitude of a phenomenon.
If the phenomenon is a difference in means between two groups, a common way to quantify it is Cohen's effect size, denoted $d$.
If the parameters for Group 1 are $(\mu_1, \sigma_1)$, and the
parameters for Group 2 are $(\mu_2, \sigma_2)$, Cohen's
effect size is
$$ d = \frac{\mu_1 - \mu_2}{(\sigma_1 + \sigma_2)/2} $$
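As a quick check of the formula, suppose (purely hypothetically) that Group 1 has $(\mu_1, \sigma_1) = (57, 12)$ and Group 2 has $(\mu_2, \sigma_2) = (49.5, 18)$; then $d = (57 - 49.5) / ((12 + 18)/2) = 7.5 / 15 = 0.5$, which is conventionally described as a medium effect. The exercise asks you to compute the full posterior distribution of $d$ rather than a single value like this.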
Use the joint posterior distributions for the two groups to compute the posterior distribution for Cohen's effect size.
If we try to enumerate all pairs from the two distributions, it takes too long, so we'll use random sampling.
The following function takes a joint posterior distribution and returns a sample of pairs.
It uses some features we have not seen yet, but you can ignore the details for now.
End of explanation
sample_treated = sample_joint(posterior_treated, 1000)
sample_treated.shape
sample_control = sample_joint(posterior_control, 1000)
sample_control.shape
Explanation: Here's how we can use it to sample pairs from the posterior distributions for the two groups.
End of explanation
# Solution
def cohen_effect(pair1, pair2):
    """Compute Cohen's effect size for difference in means.
    pair1: tuple of (mu1, sigma1)
    pair2: tuple of (mu2, sigma2)
    returns: float
    """
mu1, sigma1 = pair1
mu2, sigma2 = pair2
sigma = (sigma1 + sigma2) / 2
return (mu1 - mu2) / sigma
# Solution
cohen_effect(sample_treated[0], sample_control[0])
# Solution
ds = []
for pair1, pair2 in zip(sample_treated, sample_control):
d = cohen_effect(pair1, pair2)
ds.append(d)
# Solution
cdf = Cdf.from_seq(ds)
cdf.plot()
decorate(xlabel='Cohen effect size',
ylabel='CDF',
title='Posterior distributions of effect size')
# Solution
cdf.mean()
# Solution
cdf.credible_interval(0.9)
Explanation: The result is an array of tuples, where each tuple contains a possible pair of values for $\mu$ and $\sigma$.
Now you can loop through the samples, compute the Cohen effect size for each, and estimate the distribution of effect sizes.
End of explanation
# Solution
# Based on trial and error, here's a range of
# values for the prior
hypos = np.linspace(1, 51, 101)
# Solution
# Here are the probabilities of a score greater than 90
# for each hypothetical value of sigma.
from scipy.stats import norm
pgt90 = norm(81, hypos).sf(90)
pgt90.shape
# Solution
# And here's the chance that 5 out of 25 people
# get a score greater than 90
from scipy.stats import binom
likelihood1 = binom(25, pgt90).pmf(5)
likelihood1.shape
# Solution
# Here's the first update
prior = Pmf(1, hypos)
posterior = prior * likelihood1
posterior.normalize()
# Solution
# Here's the first posterior.
posterior.plot()
decorate(xlabel='Standard deviation (sigma)',
ylabel='PMF',
title='Posterior distribution of sigma')
# Solution
# Here's the probability of a score greater than 60
pgt60s = norm(81, hypos).sf(60)
# Solution
# And here's the probability that all 25 students exceed 60
likelihood2 = pgt60s ** 25
# Solution
plt.plot(hypos, likelihood2)
decorate(xlabel='Standard deviation (sigma)',
ylabel='Likelihood',
title='Likelihood function')
# Solution
# Here's the posterior after both updates
prior = Pmf(1, hypos)
prior.normalize()
posterior2 = prior * likelihood1 * likelihood2
posterior2.normalize()
# Solution
posterior.plot(label='Posterior 1')
posterior2.plot(label='Posterior 2')
decorate(xlabel='Standard deviation (sigma)',
ylabel='PMF',
title='Posterior distribution of sigma')
# Solution
posterior.mean(), posterior2.mean()
# Solution
posterior2.credible_interval(0.9)
Explanation: Exercise: This exercise is inspired by a question that appeared on Reddit.
An instructor announces the results of an exam like this, "The average score on this exam was 81. Out of 25 students, 5 got more than 90, and I am happy to report that no one failed (got less than 60)."
Based on this information, what do you think the standard deviation of scores was?
You can assume that the distribution of scores is approximately normal. And let's assume that the sample mean, 81, is actually the population mean, so we only have to estimate sigma.
Hint: To compute the probability of a score greater than 90, you can use norm.sf, which computes the survival function, also known as the complementary CDF, or 1 - cdf(x).
End of explanation
def get_posterior_cv(joint):
    """Get the posterior distribution of CV.
    joint: joint distribution of mu and sigma
    returns: Pmf representing the smoothed posterior distribution
    """
pmf_mu = marginal(joint, 0)
pmf_sigma = marginal(joint, 1)
pmf_cv = Pmf.div_dist(pmf_sigma, pmf_mu)
return kde_from_pmf(pmf_cv)
# Solution
n = 154407
mean = 178
std = 8.27
# Solution
qs = np.linspace(mean-0.1, mean+0.1, num=101)
prior_mu = make_uniform(qs, name='mean')
qs = np.linspace(std-0.1, std+0.1, num=101)
prior_sigma = make_uniform(qs, name='std')
prior = make_joint(prior_mu, prior_sigma)
# Solution
data = n, mean, std
posterior_male = update_norm_summary(prior, data)
plot_contour(posterior_male, cmap='Blues')
decorate(xlabel='Mean (mu)',
ylabel='Standard deviation (sigma)',
title='Joint distribution of mu and sigma')
# Solution
n = 254722
mean = 163
std = 7.75
# Solution
qs = np.linspace(mean-0.1, mean+0.1, num=101)
prior_mu = make_uniform(qs, name='mean')
qs = np.linspace(std-0.1, std+0.1, num=101)
prior_sigma = make_uniform(qs, name='std')
prior = make_joint(prior_mu, prior_sigma)
# Solution
data = n, mean, std
posterior_female = update_norm_summary(prior, data)
plot_contour(posterior_female, cmap='Oranges');
# Solution
pmf_cv_male = get_posterior_cv(posterior_male)
kde_from_pmf(pmf_cv_male).plot()
pmf_cv_female = get_posterior_cv(posterior_female)
kde_from_pmf(pmf_cv_female).plot()
decorate(xlabel='Coefficient of variation',
ylabel='PDF',
title='Posterior distributions of CV')
# Solution
ratio_cv = Pmf.div_dist(pmf_cv_female, pmf_cv_male)
ratio_cv.max_prob()
# Solution
ratio_cv.credible_interval(0.9)
Explanation: Exercise: The Variability Hypothesis is the observation that many physical traits are more variable among males than among females, in many species.
It has been a subject of controversy since the early 1800s, which suggests an exercise we can use to practice the methods in this chapter. Let's look at the distribution of heights for men and women in the U.S. and see who is more variable.
I used 2018 data from the CDC's Behavioral Risk Factor Surveillance System (BRFSS), which includes self-reported heights from 154 407 men and 254 722 women.
Here's what I found:
The average height for men is 178 cm; the average height for women is 163 cm. So men are taller on average; no surprise there.
For men the standard deviation is 8.27 cm; for women it is 7.75 cm. So in absolute terms, men's heights are more variable.
But to compare variability between groups, it is more meaningful to use the coefficient of variation (CV), which is the standard deviation divided by the mean. It is a dimensionless measure of variability relative to scale.
For men CV is 0.0465; for women it is 0.0475.
The coefficient of variation is higher for women, so this dataset provides evidence against the Variability Hypothesis. But we can use Bayesian methods to make that conclusion more precise.
Use these summary statistics to compute the posterior distribution of mu and sigma for the distributions of male and female height.
Use Pmf.div_dist to compute posterior distributions of CV.
Based on this dataset and the assumption that the distribution of height is normal, what is the probability that the coefficient of variation is higher for men?
What is the most likely ratio of the CVs and what is the 90% credible interval for that ratio?
Hint: Use different prior distributions for the two groups, and choose them so they cover all values of the parameters with non-negligible probability.
Also, you might find this function helpful:
End of explanation |
8,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyQuiver
This is an IPython Notebook interface for the PyQuiver package. The code below will guide you through using PyQuiver through a native Python interface. The same steps could be reproduced in the Python interpreter or in a .py file.
Step1: A Simple Calculation
We can reproduce the command-line interface by creating a KIE_Calculation object
Step2: To print the KIEs
Step3: We can also access the KIEs directly via the KIE dictionary belonging to our KIE_Calculation. This can be useful for automating KIE analyses over a large number of files. This functionality is further developed in the autoquiver routine (see below).
Step4: System objects
We didn't have to specify file paths as the targets of our KIE_Calculation constructor. Instead, we can work directly with System objects, which contain the geometry and Hessian as fields. Below, we load the Claisen ground state and transition state and print the position of the first atom in the ground state and print the first row of the transition state Hessian.
Step5: Running calculations with System objects
To run a KIE calculation with two System objects, we simply pass them into the relevant fields of a KIE_Calculation object
Step6: Building Isotopologues
Once we have access to the underlying System objects, it is easy to make substituted Isotopologues and perform frequency calculations ourselves. To make an Isotopologue we need to provide a name, a corresponding System, and a list of masses - one for each atom in the System.
Let's build the default light ground state Isotopologue
Step7: Isotopologues with substitutions
Now that we know how to make Isotopologues it's very easy to specify isotopic substitutions. Let's put the mysterious isotope carbon-5000 at atom 4
Step8: Calculating Reduced Isotopic Partition Function Ratios
KIEs are essentially ratios of reduced isotopic partition functions (RPFRs). To calculate these, we use the function calculate_rpfr from the kie module. This function takes a tuple of the form (light, heavy), a frequency threshold, a scaling factor, and a temperature. (All of these are discussed in detail in the README.)
calculate_rpfr returns four values in a tuple
Step9: Calculating KIEs from Isotopologues
It's nice to be able to calculate RPFRs by hand, but there is also an object available to calculate KIEs automatically
Step10: Displaying KIEs
Now, just like before, we have access to a KIE object (earlier we pulled them from the KIES dictionary). These can be printed prettily or you can take the value directly. As expected, we have large normal isotope effects because we used a carbon-5000 nucleus! (The three numbers correspond to the uncorrected, Wigner, and Bell KIEs.)
Step11: PyQuiver with multiple files
Step12: Let's test this on all the filenames in the example
Step13: Lo and behold! The function successfully separated the ground state and transition state files. We can similarly implement ts_p
Step14: gs_ts_match_p
Now all we have to do is write a function gs_ts_match_p that can take two filenames (one ground state and one transition state) and detect if they are matches. There are numerous ways to accomplish this. We will use a basic regular expression to highlight a common method for writing these functions. | Python Code:
# import the necessary package elements
import numpy as np
import sys
sys.path.append("../src")
from kie import KIE_Calculation
Explanation: PyQuiver
This is an IPython Notebook interface for the PyQuiver package. The code below will guide you through using PyQuiver through a native Python interface. The same steps could be reproduced in the Python interpreter or in a .py file.
End of explanation
claisen_calculation = KIE_Calculation("../tutorial/gaussian/claisen_demo.config", "../tutorial/gaussian/claisen_gs.out", "../tutorial/gaussian/claisen_ts.out", style="g09")
Explanation: A Simple Calculation
We can reproduce the command-line interface by creating a KIE_Calculation object:
End of explanation
print("The claisen_calculation object:")
print(claisen_calculation)
Explanation: To print the KIEs:
End of explanation
print("Iterating through the KIEs dictionary:")
for name,kie_object in claisen_calculation.KIES.items():
# pretty print the underlying KIE object:
print(kie_object)
# or pull the name and value directly:
print(type(kie_object))
print(kie_object.name)
print(kie_object.value)
break
Explanation: We can also access the KIEs directly via the KIE dictionary belonging to our KIE_Calculation. This can be useful for automating KIE analyses over a large number of files. This functionality is further developed in the autoquiver routine (see below).
End of explanation
from quiver import System
gs = System("../tutorial/gaussian/claisen_gs.out")
ts = System("../tutorial/gaussian/claisen_ts.out")
print("Position of atom 0 in the ground state:")
print(gs.positions[0])
print("First row of the Carteisan transition state Hessian:")
print(ts.hessian[0])
Explanation: System objects
We didn't have to specify file paths as the targets of our KIE_Calculation constructor. Instead, we can work directly with System objects, which contain the geometry and Hessian as fields. Below, we load the Claisen ground state and transition state and print the position of the first atom in the ground state and print the first row of the transition state Hessian.
End of explanation
claisen_calculation2 = KIE_Calculation("../tutorial/gaussian/claisen_demo.config", gs, ts)
print(claisen_calculation2)
Explanation: Running calculations with System objects
To run a KIE calculation with two System objects, we simply pass them into the relevant fields of a KIE_Calculation object:
End of explanation
from quiver import Isotopologue
from constants import DEFAULT_MASSES
# we build the default masses by using the DEFAULT_MASSES dictionary which maps from atomic numbers to masses
# default masses in DEFAULT_MASSES are the average atomic weight of each element
gs_masses = [DEFAULT_MASSES[z] for z in gs.atomic_numbers]
gs_light = Isotopologue("the light ground state", gs, gs_masses)
print(gs_light)
Explanation: Building Isotopologues
Once we have access to the underlying System objects, it is easy to make substituted Isotopologues and perform frequency calculations ourselves. To make an Isotopologue we need to provide a name, a corresponding System, and a list of masses - one for each atom in the System.
Let's build the default light ground state Isotopologue:
End of explanation
# make a copy of gs_masses
sub_masses = list(gs_masses)
# index 3 corresponds to atom number 4
sub_masses[3] = 5000.0
gs_heavy = Isotopologue("the super heavy ground state", gs, sub_masses)
Explanation: Isotopologues with substitutions
Now that we know how to make Isotopologues it's very easy to specify isotopic substitutions. Let's put the mysterious isotope carbon-5000 at atom 4:
End of explanation
from kie import calculate_rpfr
gs_rpfr, gs_imag_ratio, gs_light_freqs, gs_heavy_freqs = calculate_rpfr((gs_light, gs_heavy), 50.0, 1.0, 273)
print(gs_rpfr)
print(gs_light_freqs)
Explanation: Calculating Reduced Isotopic Partition Function Ratios
KIEs are essentially ratios of reduced isotopic partition functions (RPFRs). To calculate these, we use the function calculate_rpfr from the kie module. This function takes a tuple of the form (light, heavy), a frequency threshold, a scaling factor, and a temperature. (All of these are discussed in detail in the README.)
calculate_rpfr returns four values in a tuple: the first value is the RPFR, the second value is a ratio of large imaginary frequencies (if present), the third value is the frequencies in the light isotopomer, and the fourth value is the frequencies in the heavy isotopomer.
To print the individual contributions to the RPFR, set quiver.DEBUG = True in quiver.py and restart the kernel.
End of explanation
ts_masses = [DEFAULT_MASSES[z] for z in gs.atomic_numbers]
ts_light = Isotopologue("the light transition state", ts, ts_masses)
print(ts_light)
print()
ts_sub_masses = list(ts_masses)
ts_sub_masses[3] = 5000.0
ts_heavy = Isotopologue("the heavy transition state", ts, ts_sub_masses)
print(ts_heavy)
gs_tuple = (gs_light, gs_heavy)
ts_tuple = (ts_light, ts_heavy)
from kie import KIE
# we make a KIE object using the gs and ts tuples from above for a calculation at 273 degrees K, with a scaling factor of 1.0, and a frequency threshold of 50 cm^-1
carbon5000_kie = KIE("Carbon 5000 at C4 KIE", gs_tuple, ts_tuple, 273, 1.0, 50.0)
Explanation: Calculating KIEs from Isotopologues
It's nice to be able to calculate RPFRs by hand, but there is also an object available to calculate KIEs automatically: the KIE class.
We need to create a KIE object by passing it a pair of ground states (light and heavy) and a pair of transition states (light and heavy) as well as some temperature information.
We will make the desired substitution at Carbon 4 in the transition state and use a KIE object to calculate the KIE.
End of explanation
print(carbon5000_kie)
print(carbon5000_kie.value)
Explanation: Displaying KIEs
Now, just like before, we have access to a KIE object (earlier we pulled them from the KIES dictionary). These can be printed prettily or you can take the value directly. As expected, we have large normal isotope effects because we used a carbon-5000 nucleus! (The three numbers correspond to the uncorrected, Wigner, and Bell KIEs.)
End of explanation
def gs_p(filename):
# convert to lowercase
filename = filename.lower()
# remove the _ character
filename = filename.replace("_", "")
# replace groundstate with gs
filename = filename.replace("groundstate", "gs")
# check if the modified filename begins with the string "gs"
if filename[:2] == "gs":
return True
else:
return False
Explanation: PyQuiver with multiple files: autoquiver.py
A common problem when computing KIEs is the need to screen a large number of levels of theory with numerous ground and transition state files. So far, we have only seen how to make a KIE_Calculation object that corresponds to a single configuration, ground state, and transition state.
The autoquiver module implements functionality to perform screening over a large number of files. Its command line use is described in depth in the README. Here we will see how to leverage Python to have more sophisticated uses for autoquiver. Suppose we have the following files that we want to use as inputs to PyQuiver:
```
config.config
ground_state1.out ts-1.out
gs1rotomer.out tsrotomer1.out
GROUNDSTATE2.out transition_state2.out
gs3.out ts3.out
```
We want to run PyQuiver on pair of files above in the same row using the configuration file config.config. The command line functionality of autoquiver is great when the file names have a consistent pattern, but here it's not obvious how to generate the proper pairings or detect which files are ground or transition states.
The Arguments
The function autoquiver (which is the only object available in the autoquiver module) requires the following arguments:
filepath: this is the target directory.
config_path: the location of the configuration file to use for all PyQuiver calculations
input_extension: the file extension for ground and transition state files (default=.out)
style: the style of the ground and transition state files (default=g09)
In the README, we discussed the command:
python src/autoquiver.py -e .output auto/ auto/substitutions.config gs ts -
The strings "gs" and "ts" were used to find ground and transition files. A ground state file needed to have "gs" as a substring and likewise for "gs" and transition state files. The character "-" was used as a field delimiter and "gs" and "ts" matched if all fields after the first were identical.
That is some sensible default behaviour, but for more complex cases, we can provide the following functions:
gs_p: a function that takes a filename and returns a boolean value. True if the filename corresponds to a ground state and False if it does not.
ts_p: a function that takes a filename and returns a boolean value. True if the filename corresponds to a transition state and False if it does not.
gs_ts_match_p: a function that takes a ground state filename and a transition state filename and returns a boolean value. True if the ground state and transition state match and Fase if they do not.
gs_p and ts_p
For the previous example, we need to detect that all of the following are ground state files:
ground_state1.out
gs1_rotomer.out
GROUNDSTATE2.out
gs3.out
We can accomplish this with the following Python function:
End of explanation
gs_filenames = ["ground_state1.out", "gs1_rotomer.out", "GROUNDSTATE2.out", "gs3.out"]
ts_filenames = ["ts-1.out", "tsrotomer1.out", "transition_state2.out", "ts3.out"]
print("Checking ground state matches:")
for n in gs_filenames:
print(gs_p(n))
print("Checking transition state matches:")
for n in ts_filenames:
print(gs_p(n))
Explanation: Let's test this on all the filenames in the example:
End of explanation
def ts_p(filename):
# convert to lowercase
filename = filename.lower()
# remove the _ character
filename = filename.replace("_", "")
    # replace transitionstate with ts
    filename = filename.replace("transitionstate", "ts")
    # check if the modified filename begins with the string "ts"
if filename[:2] == "ts":
return True
else:
return False
Explanation: Lo and behold! The function successfully separated the ground state and transition state files. We can similarly implement ts_p:
End of explanation
gs_filenames = ["ground_state1.out", "gs1_rotomer.out", "GROUNDSTATE2.out", "gs3.out"]
ts_filenames = ["ts-1.out", "tsrotomer1.out", "transition_state2.out", "ts3.out"]
import re
def gs_ts_p(gs_name, ts_name):
# a regular expression that finds and extracts the first integer substring in a filename
    gs_match = re.search(r"([0-9]+)", gs_name)
    gs_number = gs_match.group(1)
    ts_match = re.search(r"([0-9]+)", ts_name)
    ts_number = ts_match.group(1)
if ts_number == gs_number:
return True
else:
return False
import itertools
for gs_n, ts_n in itertools.product(gs_filenames, ts_filenames):
if gs_ts_p(gs_n, ts_n):
print("Match: {0} {1}".format(gs_n, ts_n))
# Finally, a sketch of how autoquiver itself could be invoked with these predicate functions:
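# NOTE: the call below is only a sketch and has not been verified against the
# autoquiver API; it assumes autoquiver takes the target directory, the config
# file, the input extension/style, and the three predicate functions as
# keyword arguments. Check the signature in src/autoquiver.py before running.
from autoquiver import autoquiver
autoquiver("path/to/data/", "path/to/config.config",
           input_extension=".out", style="g09",
           gs_p=gs_p, ts_p=ts_p, gs_ts_match_p=gs_ts_p)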
Explanation: gs_ts_match_p
Now all we have to do is write a function gs_ts_match_p that can take two filenames (one ground state and one transition state) and detect if they are matches. There are numerous ways to accomplish this. We will use a basic regular expression to highlight a common method for writing these functions.
End of explanation |
8,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute MxNE with time-frequency sparse prior
The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)
that promotes focal (sparse) sources (such as dipole fitting techniques)
Step1: Run solver
Step2: Plot dipole activations
Step3: Plot location of the strongest dipole with MRI slices
Step4: Show the evoked response and the residual for gradiometers
Step5: Generate stc from dipoles
Step6: View in 2D and 3D ("glass" brain like 3D plot) | Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles
from mne.viz import (plot_sparse_source_estimates,
plot_dipole_locations, plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left visual'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked = mne.pick_channels_evoked(evoked)
# We make the window slightly larger than what you'll eventually be interested
# in ([-0.05, 0.3]) to avoid edge effects.
evoked.crop(tmin=-0.1, tmax=0.4)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
Explanation: Compute MxNE with time-frequency sparse prior
The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)
that promotes focal (sparse) sources (such as dipole fitting techniques)
:footcite:GramfortEtAl2013b,GramfortEtAl2011.
The benefit of this approach is that:
- it is spatio-temporal without assuming stationarity (source properties can vary over time)
- activations are localized in space, time and frequency in one step.
- with a built-in filtering process based on a short time Fourier transform (STFT), data does not need to be low passed (just high pass to make the signals zero mean).
- the solver solves a convex optimization problem, hence cannot be trapped in local minima.
End of explanation
# alpha parameter is between 0 and 100 (100 gives 0 active source)
alpha = 40. # general regularization parameter
# l1_ratio parameter between 0 and 1 promotes temporal smoothness
# (0 means no temporal regularization)
l1_ratio = 0.03 # temporal regularization parameter
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
loose=loose, depth=depth)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute TF-MxNE inverse solution with dipole output
dipoles, residual = tf_mixed_norm(
evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio, loose=loose,
depth=depth, maxit=200, tol=1e-6, weights=stc_dspm, weights_min=8.,
debias=True, wsize=16, tstep=4, window=0.05, return_as_dipoles=True,
return_residual=True)
# Crop to remove edges
for dip in dipoles:
dip.crop(tmin=-0.05, tmax=0.3)
evoked.crop(tmin=-0.05, tmax=0.3)
residual.crop(tmin=-0.05, tmax=0.3)
Explanation: Run solver
End of explanation
plot_dipole_amplitudes(dipoles)
Explanation: Plot dipole activations
End of explanation
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
# # Plot dipole locations of all dipoles with MRI slices:
# for dip in dipoles:
# plot_dipole_locations(dip, forward['mri_head_t'], 'sample',
# subjects_dir=subjects_dir, mode='orthoview',
# idx='amplitude')
Explanation: Plot location of the strongest dipole with MRI slices
End of explanation
ylim = dict(grad=[-120, 120])
evoked.pick_types(meg='grad', exclude='bads')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
residual.pick_types(meg='grad', exclude='bads')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
Explanation: Show the evoked response and the residual for gradiometers
End of explanation
stc = make_stc_from_dipoles(dipoles, forward['src'])
Explanation: Generate stc from dipoles
End of explanation
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1, fig_name="TF-MxNE (cond %s)"
% condition, modes=['sphere'], scale_factors=[1.])
time_label = 'TF-MxNE time=%0.2f ms'
clim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])
brain = stc.plot('sample', 'inflated', 'rh', views='medial',
clim=clim, time_label=time_label, smoothing_steps=5,
subjects_dir=subjects_dir, initial_time=150, time_unit='ms')
brain.add_label("V1", color="yellow", scalar_thresh=.5, borders=True)
brain.add_label("V2", color="red", scalar_thresh=.5, borders=True)
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation |
8,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean Variance - Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn Rate Tune - Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
    """Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
xmin = np.min(image_data)
xmax = np.max(image_data)
valRange = b-a
denominator = xmax-xmin
return [a + ((x-xmin)*valRange)/denominator for x in image_data]
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean Variance - Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
# features =
# labels =
# TODO: Set the weights and biases tensors
# weights =
# biases =
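# One possible solution (kept here for reference) using the TensorFlow 1.x
# API assumed throughout this lab; it satisfies the shape/type checks below.
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))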
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
# epochs =
# learning_rate =
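# One configuration that commonly reaches the required accuracy for this lab
# (these are suggested starting values, not the only valid choice):
epochs = 5
learning_rate = 0.2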
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn Rate Tune - Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
8,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From Command Line - Import CSV file (Raw Data) into MongoDB
mongoimport --db airbnb --type csv --headerline --file listings_new.csv -c listings_new
mongoimport --db airbnb --type csv --headerline --file barcelona_attractions.csv -c attractions
Step1: Connect Python to MongoDB
Step2: Retrieve from Database
Database named as "airbnb"
Step3: Retrieve Tables from Database
Step4: Store data in a pandas dataframe for further analysis
Step5: Convert numeric variables
Step6: Convert Amenities to Dummy Variables
Step7: Combine Attractions Data | Python Code:
import pymongo
from pymongo import MongoClient
Explanation: From Command Line - Import CSV file (Raw Data) into MongoDB
mongoimport --db airbnb --type csv --headerline --file listings_new.csv -c listings_new
mongoimport --db airbnb --type csv --headerline --file barcelona_attractions.csv -c attractions
End of explanation
client = MongoClient('mongodb://localhost:27017/')
Explanation: Connect Python to MongoDB
End of explanation
db = client.airbnb
Explanation: Retrieve from Database
Database named as "airbnb"
End of explanation
listings = db.listings_new
attractions = db.attractions
Explanation: Retrieve Tables from Database
End of explanation
import pandas as pd
listings_df = pd.DataFrame(list(db.listings_new.find()))
listings_df.head()
listings_df.columns.values
Explanation: Store data in a pandas dataframe for further analysis
End of explanation
listings_df = listings_df.convert_objects(convert_numeric=True)
listings_df['price'] = listings_df['price'].str[1:]
listings_df['price'] = listings_df.price.replace(',', '',regex=True)
listings_df['price'] = listings_df.price.astype(float).fillna(0.0)
listings_df['extra_people'] = listings_df['extra_people'].str[1:]
listings_df['extra_people'] = listings_df.extra_people.replace(',', '',regex=True).replace('', '0',regex=True)
listings_df['extra_people'] = listings_df.extra_people.astype(float).fillna(0.0)
listings_df['weekly_price'] = listings_df['weekly_price'].str[1:]
listings_df['weekly_price'] = listings_df.weekly_price.replace(',', '',regex=True).replace('', '0',regex=True)
listings_df['weekly_price'] = listings_df.weekly_price.astype(float).fillna(0.0)
listings_df['monthly_price'] = listings_df['monthly_price'].str[1:]
listings_df['monthly_price'] = listings_df.monthly_price.replace(',', '',regex=True).replace('', '0',regex=True)
listings_df['monthly_price'] = listings_df.monthly_price.astype(float).fillna(0.0)
listings_df['security_deposit'] = listings_df['security_deposit'].str[1:]
listings_df['security_deposit'] = listings_df.security_deposit.replace(',', '',regex=True).replace('', '0',regex=True)
listings_df['security_deposit'] = listings_df.security_deposit.astype(float).fillna(0.0)
listings_df['cleaning_fee'] = listings_df['cleaning_fee'].str[1:]
listings_df['cleaning_fee'] = listings_df.cleaning_fee.replace(',', '',regex=True).replace('', '0',regex=True)
listings_df['cleaning_fee'] = listings_df.cleaning_fee.astype(float).fillna(0.0)
Explanation: Convert numeric variables
End of explanation
listings_df['amenities_split'] = listings_df["amenities"].apply(lambda x: x[1:-1].split(','))
#Get unique amenities
unique_amenities = list(set(x for l in listings_df["amenities_split"] for x in l))
unique_amenities = unique_amenities[0:2] + unique_amenities[3:]
unique_amenities
num_col = len(unique_amenities) #number of columns
data_array = []
for n in range(0, len(listings_df)):
lst = []
for i in range (0, num_col):
row = listings_df["amenities_split"][n]
if unique_amenities[i] in row:
lst.append(1)
else:
lst.append(0)
data_array.append(lst)
df = pd.DataFrame(data_array, columns=unique_amenities)
listings_df2 = listings_df.join(df)
listings_df2.head()
Explanation: Convert Amenities to Dummy Variables
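As an aside, pandas' string accessor can build the same indicator columns more concisely (a sketch, assuming the amenities strings keep the "{a,b,...}" comma-separated format handled above):
amenity_dummies = listings_df['amenities'].str.strip('{}').str.get_dummies(sep=',')
listings_df.join(amenity_dummies)  # roughly the same columns as listings_df2 built above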
End of explanation
attractions = pd.DataFrame(list(db.attractions.find()))
attractions.head()
#Calculate distance between 2 lat long points
#Returns distance in km
def distance(lat1, long1, lat2, long2):
from math import sin, cos, sqrt, atan2, radians
# approximate radius of earth in km
R = 6373.0
lat1 = radians(lat1)
long1 = radians(long1)
lat2 = radians(lat2)
long2 = radians(long2)
dlong = long2 - long1
dlat = lat2 - lat1
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlong / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
return distance
for n in range(0, len(listings_df2)):
nearest_attr = attractions['attraction'][0]
nearest_attr_rating = attractions['rating'][0]
nearest_attr_lat = attractions['lat'][0]
nearest_attr_long = attractions['long'][0]
list_lat = listings_df2['latitude'][n]
list_long = listings_df2['longitude'][n]
#Distance from first attraction to listing
dist_nearest = distance(list_lat, list_long, nearest_attr_lat, nearest_attr_long)
for i in range(1, len(attractions)):
attr_lat = attractions['lat'][i]
attr_long = attractions['long'][i]
dist = distance(list_lat, list_long, attr_lat, attr_long)
if dist < dist_nearest:
nearest_attr = attractions['attraction'][i]
nearest_attr_rating = attractions['rating'][i]
nearest_attr_lat = attractions['lat'][i]
nearest_attr_long = attractions['long'][i]
dist_nearest = dist
listings_df2.loc[n, 'nearest_attr'] = nearest_attr
listings_df2.loc[n, 'nearest_attr_rating'] = nearest_attr_rating
listings_df2.loc[n, 'nearest_attr_lat'] = nearest_attr_lat
listings_df2.loc[n, 'nearest_attr_long'] = nearest_attr_long
listings_df2.loc[n, 'nearest_attr_dist'] = dist_nearest
listings_df2.head()
#listings_df2.to_csv("listings_31Mar.csv")
Explanation: Combine Attractions Data
End of explanation |
8,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 2
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features
Step4: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows
Step5: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above
Step6: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
Step7: Test your function by computing the RSS on TEST data for the example model
Step8: Create some new features
We often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), but we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature, so first you should import it from the math library.
Step9: Next create the following 4 new features as columns in both TEST and TRAIN data
Step10: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why)
Quiz Question
Step11: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features, the second model will add one more feature, and the third will add a few more
Step12: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients
Step13: Quiz Question
Step14: Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 2: Multiple Regression (Interpretation)
The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions.
In this notebook you will use data on house sales in King County to predict prices using multiple regression. You will:
* Use SFrames to do some feature engineering
* Use built-in graphlab functions to compute the regression weights (coefficients/parameters)
* Given the regression weights, predictors and outcome write a function to compute the Residual Sum of Squares
* Look at coefficients and interpret their meanings
* Evaluate multiple models via RSS
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('../Data/kc_house_data.gl/')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features,
validation_set = None)
Explanation: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features:
example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code:
(Aside: We set validation_set = None to ensure that the results are always the same)
End of explanation
example_weight_summary = example_model.get("coefficients")
print example_weight_summary
Explanation: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:
End of explanation
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
Explanation: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:
End of explanation
def get_residual_sum_of_squares(model, data, outcome):
# First get the predictions
predictions = model.predict(data)
# Then compute the residuals/errors
residuals = predictions - outcome
# Then square and add them up
RSS = sum(pow(residuals,2))
return(RSS)
Explanation: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
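Recall that the residual sum of squares is
$$\text{RSS} = \sum_{i=1}^{N} \left(\hat{y}_i - y_i\right)^2$$
where $\hat{y}_i$ is the model's prediction for observation $i$ and $y_i$ is the observed outcome; this is exactly what the completed function computes.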
End of explanation
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_train # should be 2.7376153833e+14
Explanation: Test your function by computing the RSS on TEST data for the example model:
End of explanation
import numpy as np
train_data['bedrooms_squared'] = np.power(train_data['bedrooms'],2)
train_data['bed_bath_rooms'] = train_data['bedrooms']*train_data['bathrooms']
train_data['log_sqft_living'] = np.log(train_data['sqft_living'])
train_data['lat_plus_long'] = train_data['lat']+train_data['long']
train_data.head()
test_data['bedrooms_squared'] = np.power(test_data['bedrooms'],2)
test_data['bed_bath_rooms'] = test_data['bedrooms']*test_data['bathrooms']
test_data['log_sqft_living'] = np.log(test_data['sqft_living'])
test_data['lat_plus_long'] = test_data['lat']+test_data['long']
test_data.head()
#from math import log
Explanation: Create some new features
We often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), but we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature, so first you should import it from the math library.
End of explanation
#train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
#test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)
# create the remaining 3 features in both TEST and TRAIN data
Explanation: Next create the following 4 new features as columns in both TEST and TRAIN data:
* bedrooms_squared = bedrooms*bedrooms
* bed_bath_rooms = bedrooms*bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long
As an example here's the first one:
End of explanation
print 'mean bedrooms_squared == ' + str(np.round(test_data['bedrooms_squared'].mean(),2))
print 'mean bed_bath_rooms == ' + str(np.round(test_data['bed_bath_rooms'].mean(),2))
print 'mean log_sqft_living == ' + str(np.round(test_data['log_sqft_living'].mean(),2))
print 'mean lat_plus_long == ' + str(np.round(test_data['lat_plus_long'].mean(),2))
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why)
Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)
End of explanation
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
Explanation: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features, the second model will add one more feature, and the third will add a few more:
* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude
* Model 2: add bedrooms*bathrooms
* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude
End of explanation
# Learn the three models: (don't forget to set validation_set = None)
model_1 = graphlab.linear_regression.create(train_data, target='price', features=model_1_features, validation_set=None)
model_2 = graphlab.linear_regression.create(train_data, target='price', features=model_2_features, validation_set=None)
model_3 = graphlab.linear_regression.create(train_data, target='price', features=model_3_features, validation_set=None)
# Examine/extract each model's coefficients:
print model_1.get("coefficients")
print model_2.get("coefficients")
print model_3.get("coefficients")
Explanation: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:
End of explanation
# Compute the RSS on TRAINING data for each of the three models and record the values:
print 'rss train model_1 == ' + str(get_residual_sum_of_squares(model_1, train_data, train_data['price']))
print 'rss train model_2 == ' + str(get_residual_sum_of_squares(model_2, train_data, train_data['price']))
print 'rss train model_3 == ' + str(get_residual_sum_of_squares(model_3, train_data, train_data['price']))
Explanation: Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?
positive
Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?
negative
Think about what this means.
Comparing multiple models
Now that you've learned three models and extracted the model weights we want to evaluate which model is best.
First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.
End of explanation
# Compute the RSS on TESTING data for each of the three models and record the values:
print 'rss test model_1 == ' + str(get_residual_sum_of_squares(model_1, test_data, test_data['price']))
print 'rss test model_2 == ' + str(get_residual_sum_of_squares(model_2, test_data, test_data['price']))
print 'rss test model_3 == ' + str(get_residual_sum_of_squares(model_3, test_data, test_data['price']))
Explanation: Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected?
3
Now compute the RSS on on TEST data for each of the three models.
End of explanation |
8,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hadoop Short Course
1. Hadoop Distributed File System
Hadoop Distributed File System (HDFS)
HDFS is the primary distributed storage used by Hadoop applications. A HDFS cluster primarily consists of a NameNode that manages the file system metadata and DataNodes that store the actual data. The HDFS Architecture Guide describes HDFS in detail. To learn more about the interaction of users and administrators with HDFS, please refer to HDFS User Guide.
All HDFS commands are invoked by the bin/hdfs script. Running the hdfs script without any arguments prints the description for all commands. For all the commands, please refer to HDFS Commands Reference
Start HDFS
Step1: Normal file operations and data preparation for later example
list recursively everything under the root dir
Download some files for later use. The files should already be there.
Step2: Delete existing folders under /user/ubuntu/ in hdfs
Create input folder
Step3: Test locally the mapper.py and reduce.py
Step4: List the output
Download the output file (part-00000) to local fs.
Step5: 3. Exercise
Step6: Verify Results
Copy the output file to local
run the following command, and compare with the downloaded output
sort -nrk 2,2 part-00000 | head -n 20
The wc1-part-00000 is the output of the previous wordcount (wordcount1) | Python Code:
hadoop_root = '/home/ubuntu/shortcourse/hadoop-2.7.1/'
hadoop_start_hdfs_cmd = hadoop_root + 'sbin/start-dfs.sh'
hadoop_stop_hdfs_cmd = hadoop_root + 'sbin/stop-dfs.sh'
# start the hadoop distributed file system
! {hadoop_start_hdfs_cmd}
# show the java jvm process summary
# You should see NameNode, SecondaryNameNode, and DataNode
! jps
Explanation: Hadoop Short Course
1. Hadoop Distributed File System
Hadoop Distributed File System (HDFS)
HDFS is the primary distributed storage used by Hadoop applications. A HDFS cluster primarily consists of a NameNode that manages the file system metadata and DataNodes that store the actual data. The HDFS Architecture Guide describes HDFS in detail. To learn more about the interaction of users and administrators with HDFS, please refer to HDFS User Guide.
All HDFS commands are invoked by the bin/hdfs script. Running the hdfs script without any arguments prints the description for all commands. For all the commands, please refer to HDFS Commands Reference
Start HDFS
End of explanation
# We will use three ebooks from Project Gutenberg for later example
# Pride and Prejudice by Jane Austen: http://www.gutenberg.org/ebooks/1342.txt.utf-8
! wget http://www.gutenberg.org/ebooks/1342.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/pride-and-prejudice.txt
# Alice's Adventures in Wonderland by Lewis Carroll: http://www.gutenberg.org/ebooks/11.txt.utf-8
! wget http://www.gutenberg.org/ebooks/11.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/alice.txt
# The Adventures of Sherlock Holmes by Arthur Conan Doyle: http://www.gutenberg.org/ebooks/1661.txt.utf-8
! wget http://www.gutenberg.org/ebooks/1661.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/sherlock-holmes.txt
Explanation: Normal file operations and data preparation for later example
list recursively everything under the root dir
Download some files for later use. The files should already be there.
End of explanation
# Start Yarn, the resource allocator for Hadoop
! {hadoop_root + 'sbin/start-yarn.sh'}
Explanation: Delete existing folders under /user/ubuntu/ in hdfs
Create input folder: /user/ubuntu/input
Copy the three books to the input folder in HDFS.
Similar to normal bash cmd:
cp /home/ubuntu/shortcourse/data/wordcount/* /user/ubuntu/input/
but copy to hdfs.
Show if the files are there.
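A sketch of the corresponding HDFS commands (paths assumed from the text above; the hdfs script lives under bin/ as noted earlier):
! {hadoop_root + 'bin/hdfs'} dfs -rm -r -f /user/ubuntu/input /user/ubuntu/output
! {hadoop_root + 'bin/hdfs'} dfs -mkdir -p /user/ubuntu/input
! {hadoop_root + 'bin/hdfs'} dfs -put /home/ubuntu/shortcourse/data/wordcount/* /user/ubuntu/input/
! {hadoop_root + 'bin/hdfs'} dfs -ls -R /user/ubuntu/input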
2. WordCount Example
Let's count the single word frequency in the uploaded three books.
Start Yarn, the resource allocator for Hadoop.
End of explanation
# wordcount 1 the scripts
# Map: /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py
# Test locally the map script
! echo "go gators gators beat everyone go glory gators" | \
/home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py
# Reduce: /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py
# Test locally the reduce script
! echo "go gators gators beat everyone go glory gators" | \
/home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py | \
sort -k1,1 | \
/home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py
# run them with Hadoop against the uploaded three books
cmd = hadoop_root + 'bin/hadoop jar ' + hadoop_root + 'hadoop-streaming-2.7.1.jar ' + \
'-input input ' + \
'-output output ' + \
'-mapper /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \
'-reducer /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py ' + \
'-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \
'-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py'
! {cmd}
Explanation: Test locally the mapper.py and reduce.py
End of explanation
# Let's see what's in the output file
# delete if previous results exist
! tail -n 20 $(THE_DOWNLOADED_FILE)
Explanation: List the output
Download the output file (part-00000) to local fs.
End of explanation
# 1. go to wordcount2 folder, modify the mapper
# 2. test locally if the mapper is working
# 3. run with hadoop streaming. Input is still the three books, output to 'output2'
Explanation: 3. Exercise: WordCount2
Count the single word frequency, where the words are given in a pattern file.
For example, given pattern.txt file, which contains:
"a b c d"
And the input file is:
"d e a c f g h i a b c d".
Then the output should be:
"a 1
b 1
c 2
d 2"
Please copy the mapper.py and reducer.py from the first wordcount example to the folder "/home/ubuntu/shortcourse/notes/scripts/wordcount2/". The pattern file is given in the wordcount2 folder with name "wc2-pattern.txt"
Hint:
1. pass the pattern file using "-file option" and use -cmdenv to pass the file name as environment variable
2. in the mapper, read the pattern file into a set
3. only print out the words that exist in the set
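A minimal sketch of such a mapper (the PATTERN_FILE environment-variable name is an assumption; it would be set with -cmdenv PATTERN_FILE=wc2-pattern.txt and the pattern file shipped with -file):
#!/usr/bin/env python
import os
import sys
# load the pattern words shipped alongside the job into a set
pattern_words = set(open(os.environ.get('PATTERN_FILE', 'wc2-pattern.txt')).read().split())
for line in sys.stdin:
    for word in line.strip().split():
        if word in pattern_words:
            print('%s\t%s' % (word, 1))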
End of explanation
# 1. list the output, download the output to local, and cat the output file
# 2. use bash cmd to find out the most frequently used 20 words from the previous example,
# and compare the results with this output
# stop dfs and yarn
!{hadoop_root + 'sbin/stop-yarn.sh'}
# don't stop hdfs for now, later use
# !{hadoop_stop_hdfs_cmd}
Explanation: Verify Results
Copy the output file to local
run the following command, and compare with the downloaded output
sort -nrk 2,2 part-00000 | head -n 20
The wc1-part-00000 is the output of the previous wordcount (wordcount1)
End of explanation |
8,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lake Model Solutions
Exercise 1
We begin by initializing the variables and importing the necessary modules
Step1: Now construct the class containing the initial conditions of the problem
Step2: New legislation changes $\lambda$ to $0.2$
Step3: Now plot stocks
Step4: And how the rates evolve
Step5: We see that it takes 20 periods for the economy to converge to its new steady state levels
Exercise 2
This next exercise has the economy experiencing a boom in entrances to the labor market and then later returning to the original levels. For 20 periods the economy has a new entry rate into the labor market
Step6: We simulate for 20 periods at the new parameters
Step7: Now using the state after 20 periods for the new initial conditions we simulate for the additional 30 periods
Step8: Finally we combine these two paths and plot
Step9: And the rates | Python Code:
%pylab inline
import LakeModel
alpha = 0.012
lamb = 0.2486
b = 0.001808
d = 0.0008333
g = b-d
N0 = 100.
e0 = 0.92
u0 = 1-e0
T = 50
Explanation: Lake Model Solutions
Exercise 1
We begin by initializing the variables and importing the necessary modules
End of explanation
LM0 = LakeModel.LakeModel(lamb,alpha,b,d)
x0 = LM0.find_steady_state()# initial conditions
print "Initial Steady State: ", x0
Explanation: Now construct the class containing the initial conditions of the problem
End of explanation
LM1 = LakeModel.LakeModel(0.2,alpha,b,d)
xbar = LM1.find_steady_state() # new steady state
X_path = vstack(LM1.simulate_stock_path(x0*N0,T)) # simulate stocks
x_path = vstack(LM1.simulate_rate_path(x0,T)) # simulate rates
print "New Steady State: ", xbar
Explanation: New legislation changes $\lambda$ to $0.2$
End of explanation
figure(figsize=[10,9])
subplot(3,1,1)
plot(X_path[:,0])
title(r'Employment')
subplot(3,1,2)
plot(X_path[:,1])
title(r'Unemployment')
subplot(3,1,3)
plot(X_path.sum(1))
title(r'Labor Force')
Explanation: Now plot stocks
End of explanation
figure(figsize=[10,6])
subplot(2,1,1)
plot(x_path[:,0])
hlines(xbar[0],0,T,'r','--')
title(r'Employment Rate')
subplot(2,1,2)
plot(x_path[:,1])
hlines(xbar[1],0,T,'r','--')
title(r'Unemployment Rate')
Explanation: And how the rates evolve:
End of explanation
bhat = 0.003
T_hat = 20
LM1 = LakeModel.LakeModel(lamb,alpha,bhat,d)
Explanation: We see that it takes 20 periods for the economy to converge to its new steady state levels
Exercise 2
This next exercise has the economy experiencing a boom in entrances to the labor market and then later returning to the original levels. For 20 periods the economy has a new entry rate into the labor market
End of explanation
X_path1 = vstack(LM1.simulate_stock_path(x0*N0,T_hat)) # simulate stocks
x_path1 = vstack(LM1.simulate_rate_path(x0,T_hat)) # simulate rates
Explanation: We simulate for 20 periods at the new parameters
End of explanation
X_path2 = vstack(LM0.simulate_stock_path(X_path1[-1,:2],T-T_hat+1)) # simulate stocks
x_path2 = vstack(LM0.simulate_rate_path(x_path1[-1,:2],T-T_hat+1)) # simulate rates
Explanation: Now using the state after 20 periods for the new initial conditions we simulate for the additional 30 periods
End of explanation
x_path = vstack([x_path1,x_path2[1:]]) # note [1:] to avoid doubling period 20
X_path = vstack([X_path1,X_path2[1:]]) # note [1:] to avoid doubling period 20
figure(figsize=[10,9])
subplot(3,1,1)
plot(X_path[:,0])
title(r'Employment')
subplot(3,1,2)
plot(X_path[:,1])
title(r'Unemployment')
subplot(3,1,3)
plot(X_path.sum(1))
title(r'Labor Force')
Explanation: Finally we combine these two paths and plot
End of explanation
figure(figsize=[10,6])
subplot(2,1,1)
plot(x_path[:,0])
hlines(x0[0],0,T,'r','--')
title(r'Employment Rate')
subplot(2,1,2)
plot(x_path[:,1])
hlines(x0[1],0,T,'r','--')
title(r'Unemployment Rate')
Explanation: And the rates:
End of explanation |
8,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detrending, Stylized Facts and the Business Cycle
In an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as "structural time series models") to derive stylized facts of the business cycle.
Their paper begins
Step1: Unobserved Components
The unobserved components model available here can be written as
Step2: To get a sense of these three variables over the timeframe, we can plot them
Step3: Model
Since the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is
Step4: We now fit the following models
Step5: Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the summary method on the fit object.
Step6: For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.
The plot_components method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.
Step7: Harvey and Jaeger summarize the models in another way to highlight the relative importances of the trend and cyclical components; below we replicate their Table I. The values we find are broadly consistent with, but different in the particulars from, the values from their table.
Step8: Finally, Harvey and Jaeger compare the smoothed cyclical component to the output of the HP-filter. In contrast to their results (see for example figures 4(a) and 4(b)), we find that the cyclical component from the unobserved components model is quite similar to the output from the HP filter. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import dismalpy as dp
import matplotlib.pyplot as plt
from IPython.display import display, Latex
Explanation: Detrending, Stylized Facts and the Business Cycle
In an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as "structural time series models") to derive stylized facts of the business cycle.
Their paper begins:
"Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step
in macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic
properties of the data and (2) present meaningful information."
In particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.
DismalPy has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.
End of explanation
# Datasets
from pandas.io.data import DataReader
# Get the raw data
start = '1948-01'
end = '2008-01'
us_gnp = DataReader('GNPC96', 'fred', start=start, end=end)
us_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)
us_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS')
recessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS', how='last').values[:,0]
# Construct the dataframe
dta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)
dta.columns = ['US GNP','US Prices','US monetary base']
dates = dta.index._mpl_repr()
Explanation: Unobserved Components
The unobserved components model available here can be written as:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{\gamma_{t}}_{\text{seasonal}} + \underbrace{c_{t}}_{\text{cycle}} + \sum_{j=1}^k \underbrace{\beta_j x_{jt}}_{\text{explanatory}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
see Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. The specific models considered in the paper and below are specializations of this general equation.
Trend
The trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.
$$
\begin{align}
\underbrace{\mu_{t+1}}_{\text{level}} & = \mu_t + \nu_t + \eta_{t+1} \qquad & \eta_{t+1} \sim N(0, \sigma_\eta^2) \\
\underbrace{\nu_{t+1}}_{\text{trend}} & = \nu_t + \zeta_{t+1} & \zeta_{t+1} \sim N(0, \sigma_\zeta^2)
\end{align}
$$
where the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.
For both elements (level and trend), we can consider models in which:
The element is included vs excluded (if the trend is included, there must also be a level included).
The element is deterministic vs stochastic (i.e. whether or not the variance on the error term is confined to be zero or not)
The only additional parameters to be estimated via MLE are the variances of any included stochastic components.
This leads to the following specifications:
| | Level | Trend | Stochastic Level | Stochastic Trend |
|----------------------------------------------------------------------|-------|-------|------------------|------------------|
| Constant | ✓ | | | |
| Local Level <br /> (random walk) | ✓ | | ✓ | |
| Deterministic trend | ✓ | ✓ | | |
| Local level with deterministic trend <br /> (random walk with drift) | ✓ | ✓ | ✓ | |
| Local linear trend | ✓ | ✓ | ✓ | ✓ |
| Smooth trend <br /> (integrated random walk) | ✓ | ✓ | | ✓ |
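For instance, the last two rows of the table map to constructor arguments along these lines (a sketch using the statsmodels-style interface that this notebook also uses; y stands for one of the series):
# local linear trend: level and trend both present and both stochastic
sm.tsa.UnobservedComponents(y, level=True, trend=True, stochastic_level=True, stochastic_trend=True)
# smooth trend: stochastic trend but deterministic level (equivalently level='smooth trend')
sm.tsa.UnobservedComponents(y, level=True, trend=True, stochastic_trend=True)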
Seasonal
The seasonal component is written as:
<span>$$
\gamma_t = - \sum_{j=1}^{s-1} \gamma_{t+1-j} + \omega_t \qquad \omega_t \sim N(0, \sigma_\omega^2)
$$</span>
The periodicity (number of seasons) is s, and the defining character is that (without the error term), the seasonal components sum to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time.
The variants of this model are:
The periodicity s
Whether or not to make the seasonal effects stochastic.
If the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term).
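Although the series used below are already seasonally adjusted, a stochastic quarterly seasonal could be requested along these lines (a sketch):
sm.tsa.UnobservedComponents(y, level='local level', seasonal=4, stochastic_seasonal=True)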
Cycle
The cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between "1.5 and 12 years" (see Durbin and Koopman).
The cycle is written as:
<span>$$
\begin{align}
c_{t+1} & = c_t \cos \lambda_c + c_t^* \sin \lambda_c + \tilde \omega_t \qquad & \tilde \omega_t \sim N(0, \sigma_{\tilde \omega}^2) \\
c_{t+1}^* & = -c_t \sin \lambda_c + c_t^* \cos \lambda_c + \tilde \omega_t^* & \tilde \omega_t^* \sim N(0, \sigma_{\tilde \omega}^2)
\end{align}
$$</span>
The parameter $\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. If the cyclical effect is stochastic, then there is one additional parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).
Irregular
The irregular component is assumed to be a white noise error term. Its variance is a parameter to be estimated by MLE; i.e.
$$
\varepsilon_t \sim N(0, \sigma_\varepsilon^2)
$$
In some cases, we may want to generalize the irregular component to allow for autoregressive effects:
$$
\varepsilon_t = \rho(L) \varepsilon_{t-1} + \epsilon_t, \qquad \epsilon_t \sim N(0, \sigma_\epsilon^2)
$$
In this case, the autoregressive parameters would also be estimated via MLE.
Regression effects
We may want to allow for explanatory variables by including additional terms
<span>$$
\sum_{j=1}^k \beta_j x_{jt}
$$</span>
or for intervention effects by including
<span>$$
\begin{align}
\delta w_t \qquad \text{where} \qquad w_t & = 0, \qquad t < \tau, \\
& = 1, \qquad t \ge \tau
\end{align}
$$</span>
These additional parameters could be estimated via MLE or by including them as components of the state space formulation.
Data
Following Harvey and Jaeger, we will consider the following time series:
US real GNP, "output", (GNPC96)
US GNP implicit price deflator, "prices", (GNPDEF)
US monetary base, "money", (AMBSL)
The time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.
All data series considered here are taken from Federal Reserve Economic Data (FRED). Conveniently, the Python library Pandas has the ability to download data from FRED directly.
End of explanation
# Plot the data
ax = dta.plot(figsize=(13,3))
ylim = ax.get_ylim()
ax.xaxis.grid()
ax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);
Explanation: To get a sense of these three variables over the timeframe, we can plot them:
End of explanation
# Model specifications
# Unrestricted model, using string specification
unrestricted_model = {
'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Unrestricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# local linear trend model with a stochastic damped cycle:
# unrestricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
# The restricted model forces a smooth trend
restricted_model = {
'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Restricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# smooth trend model with a stochastic damped cycle. Notice
# that the difference from the local linear trend model is that
# `stochastic_level=False` here.
# unrestricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
Explanation: Model
Since the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{c_{t}}_{\text{cycle}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
The irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:
Local linear trend (the "unrestricted" model)
Smooth trend (the "restricted" model, since we are forcing $\sigma_\eta = 0$)
Below, we construct kwargs dictionaries for each of these model types. Notice that there are two ways to specify the models. One way is to specify components directly, as in the table above. The other way is to use string names which map to various specifications.
End of explanation
# Output
output_mod = dp.ssm.UnobservedComponents(dta['US GNP'], **unrestricted_model)
output_res = output_mod.fit(method='powell', disp=False)
# Prices
prices_mod = dp.ssm.UnobservedComponents(dta['US Prices'], **unrestricted_model)
prices_res = prices_mod.fit(method='powell', disp=False)
prices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model)
prices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False)
# Money
money_mod = dp.ssm.UnobservedComponents(dta['US monetary base'], **unrestricted_model)
money_res = money_mod.fit(method='powell', disp=False)
money_restricted_mod = dp.ssm.UnobservedComponents(dta['US monetary base'], **restricted_model)
money_restricted_res = money_restricted_mod.fit(method='powell', disp=False)
Explanation: We now fit the following models:
Output, unrestricted model
Prices, unrestricted model
Prices, restricted model
Money, unrestricted model
Money, restricted model
End of explanation
print(output_res.summary())
Explanation: Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the summary method on the fit object.
End of explanation
fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));
Explanation: For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.
The plot_components method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.
End of explanation
# Create Table I
table_i = np.zeros((5,6))
start = dta.index[0]
end = dta.index[-1]
time_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter)
models = [
('US GNP', time_range, 'None'),
('US Prices', time_range, 'None'),
('US Prices', time_range, r'$\sigma_\eta^2 = 0$'),
('US monetary base', time_range, 'None'),
('US monetary base', time_range, r'$\sigma_\eta^2 = 0$'),
]
index = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions'])
parameter_symbols = [
r'$\sigma_\zeta^2$', r'$\sigma_\eta^2$', r'$\sigma_\kappa^2$', r'$\rho$',
r'$2 \pi / \lambda_c$', r'$\sigma_\varepsilon^2$',
]
i = 0
for res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res):
if res.model.stochastic_level:
(sigma_irregular, sigma_level, sigma_trend,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
else:
(sigma_irregular, sigma_level,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
sigma_trend = np.nan
period_cycle = 2 * np.pi / frequency_cycle
table_i[i, :] = [
sigma_level*1e7, sigma_trend*1e7,
sigma_cycle*1e7, damping_cycle, period_cycle,
sigma_irregular*1e7
]
i += 1
pd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-')
table_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols)
table_i
Explanation: Harvey and Jaeger summarize the models in another way to highlight the relative importances of the trend and cyclical components; below we replicate their Table I. The values we find are broadly consistent with, but different in the particulars from, the values from their table.
End of explanation
cycle, trend = sm.tsa.filters.hpfilter(dta['US monetary base'])
fig, axes = plt.subplots(2, figsize=(13,4))
axes[0].plot(dta.index, money_restricted_res.cycle.smoothed, label='Cycle (smoothed)')
axes[0].legend(loc='upper left')
axes[1].plot(cycle.index, cycle, label='Cycle (HP filter)')
axes[1].legend(loc='upper left');
Explanation: Finally, Harvey and Jaeger compare the smoothed cyclical component to the output of the HP-filter. In contrast to their results (see for example figures 4(a) and 4(b)), we find that the cyclical component from the unobserved components model is quite similar to the output from the HP filter.
End of explanation |
8,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What does the following code do?
Step1: What does the following code do?
Step2: What does the following code do?
Step3: What does the following code do?
Step4: What does the following code do?
What the heck is linspace?
What are we plotting?
Step7: Put together | Python Code:
d = pd.read_csv("data/dataset_0.csv")
fig, ax = plt.subplots()
ax.plot(d.x,d.y,'o')
Explanation: What does the following code do?
End of explanation
def linear(x,a,b):
return a + b*x
Explanation: What does the following code do?
End of explanation
def linear(x,a,b):
return a + b*x
def linear_r(param,x,y):
return linear(x,param[0],param[1]) - y
Explanation: What does the following code do?
End of explanation
def linear_r(param,x,y): # copied from previous cell
return linear(x,param[0],param[1]) - y # copied from previous cell
param_guesses = [1,1]
fit = scipy.optimize.least_squares(linear_r,param_guesses,
args=(d.x,d.y))
fit_a = fit.x[0]
fit_b = fit.x[1]
sum_of_square_residuals = fit.cost
Explanation: What does the following code do?
End of explanation
x_range = np.linspace(np.min(d.x),np.max(d.x),100)
fig, ax = plt.subplots()
ax.plot(d.x,d.y,"o")
ax.plot(x_range,linear(x_range,fit_a,fit_b))
Explanation: What does the following code do?
What the heck is linspace?
What are we plotting?
End of explanation
import numpy as np
import pandas as pd
import scipy.optimize
import matplotlib.pyplot as plt
def linear(x,a,b):
Linear model of x using a (intercept) and b (slope)
return a + b*x
def linear_r(param,x,y):
Residuals function for linear
return linear(x,param[0],param[1]) - y
fig, ax = plt.subplots()
# Read data
d = pd.read_csv("data/dataset_0.csv")
ax.plot(d.x,d.y,'o')
# Perform regression
param_guesses = [1,1]
fit = scipy.optimize.least_squares(linear_r,param_guesses,args=(d.x,d.y))
fit_a = fit.x[0]
fit_b = fit.x[1]
sum_of_square_residuals = fit.cost
# Plot result
x_range = np.linspace(np.min(d.x),np.max(d.x),100)
ax.plot(x_range,linear(x_range,fit_a,fit_b))
fit
Explanation: Put together
End of explanation |
8,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prevalence of Personal Attacks
In this notebook, we do some basic investigation into the frequency of personal attacks on Wikipedia. We will attempt to provide some insight into the following questions
Step1: Q
Step2: A higher proportion of user talk comments are attacks. At a conservative threshold of 0.8, 1
Step3: This is the same plot as above, except that we used the model scores instead of the mean human scores. The model gives fewer comments a score above 0.5. Hence, it suggests that a smaller proportion of comments is personal attacks. This is just for comparison. We can simply use the human scores to answer this question.
Step4: Methodology 2
A potential downside of measuring personal attacks as we did above is that, when we pick a threshold, we don't count any of the attacking comments with a score below the threshold. If we interpret the score as the probability that a human would consider the comment an attack, then we can use the scores to compute the expected fraction of attacks in a corpus. To get a measure of uncertainty, we can use the scores as probabilities in a simulation as follows
Step5: This method gives that roughly 2.3% of user talk comments are personal attacks. I would consider this method less reliable than the above, since here we are relying on the fact that each annotator is high quality. Just having one trigger happy annotator for each comment already gives an expected attack fraction of 10%.
Step6: The model over-predicts relative to the annotators for method M2, since the model's score distribution is skewed right in the [0, 0.2] interval and that is where most of the data is. Again, we don't need the model scores to answer this question, I am just including it for comparison.
Step7: Q
Step8: 1.6% of registered users have written at least one comment with a score above 0.5
0.5% of registered users have written at least one comment with a score above 0.8.
Methodology 2
Step9: These results are inflated, due to model's right skew. This means that users with a lot of comments are very likely to be assigned at least one attacking comment in the simulation. I would not recommend using this method.
Q
Step10: Methodology 2
Step11: Just as above, these results are inflated, due to model's right skew.
Q
Step12: The is a strong yearly pattern. The fraction of attacks peaked in 2008, which is when participation peaked as well, Since 2013, we have seen an increase in the fraction of attacks.
Q
Step13: Not much of a pattern, large intervals relative to the trend.
Q | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from load_utils import *
from analysis_utils import compare_groups
d = load_diffs()
df_events, df_blocked_user_text = load_block_events_and_users()
Explanation: Prevalence of Personal Attacks
In this notebook, we do some basic investigation into the frequency of personal attacks on Wikipedia. We will attempt to provide some insight into the following questions:
What fraction of comments are personal attacks?
What fraction of users have made a personal attack?
What fraction of users have been attacked on their user page?
Are there any temporal trends in the frequency of attacks?
We have 2 separate types of data at our disposal. First, we have a random sample of roughly 100k human-labeled comments. Each comment was labeled by 10 separate people as to whether the comment is a personal attack. We can average these labels to get a single score. Second, we have the full history of comments with attack probabilities generated by a machine learning model. The machine learning model is imperfect. In particular, it is more conservative than human annotators about making "extreme" predictions. In particular, it is less likely to give non-attacks a score of 0 and also less likely to give attacks a score greater than 0.5, compared to humans (for more details, see the "Model Checking" notebook). For all questions above apart from the first, we will need to use the model scores.
End of explanation
def prevalence_m1(d, sample, score, ts = np.arange(0.5, 0.91, 0.1)):
d_a = pd.concat([pd.DataFrame({'threshold': t, 'attack': d[sample].query("ns=='article'")[score] >= t }) for t in ts], axis = 0)
d_a['condition'] = 'article talk'
d_u = pd.concat([pd.DataFrame({'threshold': t, 'attack': d[sample].query("ns=='user'")[score] >= t }) for t in ts], axis = 0)
d_u['condition'] = 'user talk'
df = pd.concat([d_u, d_a])
df['attack'] = df['attack'] * 100
sns.pointplot(x="threshold", y="attack", hue="condition", data=df)
plt.ylabel('percent of comments that are attacks')
plt.ylim(-0.1, plt.ylim()[1])
plt.savefig('../../paper/figs/threshold_prevalence_%s_%s' % (sample, score))
# human annotations
prevalence_m1(d, 'annotated', 'recipient_score')
Explanation: Q: What fraction of comments are personal attacks?
Methodology 1:
Compute fraction of comments predicted to be attacks for different classification thresholds. We can rely on just the human labeled data for this. However, we can also see what results we would get if we used the model predictions.
End of explanation
# model predictions on human annotated data
prevalence_m1(d, 'annotated', 'pred_recipient_score')
Explanation: A higher proportion of user talk comments are attacks. At a conservative threshold of 0.8, roughly 1 in 400 user talk comments is an attack.
End of explanation
#prevalence_m1(d, 'sample', 'pred_recipient_score')
Explanation: This is the same plot as above, except that we used the model scores instead of the mean human scores. The model gives fewer comments a score above 0.5. Hence, it suggests that a smaller proportion of comments is personal attacks. This is just for comparison. We can simply use the human scores to answer this question.
End of explanation
def prevalence_m2(d, sample, score, n = 100):
def compute_ci(a, n = 10):
m = a.shape[0]
v = a.values.reshape((m,1))
fs = np.sum(np.random.rand(m, n) < v, axis = 0) / m
print("Percent of comments labeled as attacks: (%.3f, %.3f)" % ( 100*np.percentile(fs, 2.5), 100*np.percentile(fs, 97.5)))
print('\nUser:')
compute_ci(d[sample].query("ns=='user'")[score])
print('Article:')
compute_ci(d[sample].query("ns=='article'")[score])
# human annotations
prevalence_m2(d, 'annotated', 'recipient_score')
Explanation: Methodology 2
A potential downside of measuring personal attacks as we did above is that, when we pick a threshold, we don't count any of the attacking comments with a score below the threshold. If we interpret the score as the probability that a human would consider the comment an attack, then we can use the scores to compute the expected fraction of attacks in a corpus. To get a measure of uncertainty, we can use the scores as probabilities in a simulation as follows: for each comment, label it as a personal attack with the probability assigned by the model/annotators. Count the fraction of comments labeled as personal attacks. Repeat to get a distribution and take 95% interval.
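Note that the expected fraction of attacks under this scheme is simply the mean of the scores; the simulation only adds an uncertainty interval around it. A quick check, using the same column and namespace filter as above:
d['annotated'].query("ns=='user'")['recipient_score'].mean()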
End of explanation
# model predictions on annotated data
prevalence_m2(d, 'annotated', 'pred_recipient_score')
Explanation: This method gives that roughly 2.3% of user talk comments are personal attacks. I would consider this method less reliable than the above, since here we are relying on the fact that each annotator is high quality. Just having one trigger happy annotator for each comment already gives an expected attack fraction of 10%.
End of explanation
#prevalence_m2(d, 'sample', 'pred_recipient_score')
Explanation: The model over-predicts relative to the annotators for method M2, since the model's score distribution is skewed right in the [0, 0.2] interval and that is where most of the data is. Again, we don't need the model scores to answer this question, I am just including it for comparison.
End of explanation
ts = np.arange(0.5, 0.91, 0.1)
dfs = []
for t in ts:
dfs.append (\
d['2015'].assign(attack= lambda x: 100 * (x.pred_recipient_score > t))\
.groupby(['user_text', 'author_anon'], as_index = False)['attack'].max()\
.assign(threshold = t)
)
sns.pointplot(x='threshold', y = 'attack', hue = 'author_anon', data = pd.concat(dfs))
plt.ylabel('percent of attacking users')
Explanation: Q: What fraction of users have made a personal attack?
Methodology 1:
Take unsampled data from 2015. Compute fraction of people who authored one comment above the threshold for different thresholds. Note: this requires using the model scores.
End of explanation
def simulate_num_attacks(df, group_col = 'user_text'):
n = df.assign( uniform = np.random.rand(df.shape[0], 1))\
.assign(is_attack = lambda x: (x.pred_recipient_score >= x.uniform).astype(int))\
.groupby(group_col)['is_attack']\
.max()\
.sum()
return n
n_attacks = [simulate_num_attacks(d['2015']) for i in range(100)]
100 * (np.percentile(n_attacks, [2.5, 97.5]) / len(d['2015'].user_text.unique()))
# ignore anon users
d_temp = d['2015'].query('not author_anon and not recipient_anon')
n_attacks = [simulate_num_attacks(d_temp) for i in range(100)]
100 * (np.percentile(n_attacks, [2.5, 97.5]) / len(d_temp.user_text.unique()))
Explanation: 1.6% of registered users have written at least one comment with a score above 0.5
0.5% of registered users have written at least one comment with a score above 0.8.
Methodology 2:
Take unsampled data. For each comment, let it be an attack with probability equal to the model prediction. Count the number of users that have made at least 1 attack. Repeat.
End of explanation
dfs = []
for t in ts:
dfs.append (\
d['2015'].query("not own_page and ns=='user'")\
.assign(attack = lambda x: 100 * (x.pred_recipient_score >= t))\
.groupby(['page_title', 'recipient_anon'], as_index = False)['attack'].max()\
.assign(threshold = t)
)
sns.pointplot(x='threshold', y = 'attack', hue = 'recipient_anon', data = pd.concat(dfs))
plt.ylabel('percent of attacked users')
Explanation: These results are inflated, due to model's right skew. This means that users with a lot of comments are very likely to be assigned at least one attacking comment in the simulation. I would not recommend using this method.
Q: What fraction of users have been attacked on their user page?
Methodology:
Take unsampled data from 2015. Compute fraction of people who recieved one comment above the threshold for different thresholds.
End of explanation
n_attacks = [simulate_num_attacks(d['2015'].query("ns=='user'"), group_col = 'page_title') for i in range(10)]
100 * (np.percentile(n_attacks, [2.5, 97.5]) / len(d['2015'].page_title.unique()))
# ignore anon users
d_temp = d['2015'].query("not author_anon and not recipient_anon and ns=='user'")
n_attacks = [simulate_num_attacks(d_temp, group_col = 'page_title') for i in range(10)]
100 * (np.percentile(n_attacks, [2.5, 97.5]) / len(d_temp.page_title.unique()))
Explanation: Methodology 2:
Take unsampled data. For each comment, let it be an attack with probability equal to the model prediction. Count the number of users that have received at least 1 attack. Repeat.
End of explanation
df_span = d['sample'].query('year > 2003 & year < 2016')
df_span['is_attack'] = (df_span['pred_recipient_score'] > 0.5).astype(int) * 100
x = 'year'
o = range(2004, 2016)
sns.pointplot(x=x, y = 'is_attack', data = df_span, order = o)
plt.ylabel('Percent of comments that are attacks')
Explanation: Just as above, these results are inflated due to the model's right skew.
Q: Has the proportion of attacks changed year over year?
End of explanation
x = 'month'
o = range(1, 13)
sns.pointplot(x=x, y = 'is_attack', data = df_span, order = o)
plt.ylabel('Percent of comments that are attacks')
Explanation: There is a strong yearly pattern. The fraction of attacks peaked in 2008, which is when participation peaked as well. Since 2013, we have seen an increase in the fraction of attacks.
Q: Is there a seasonal effect?
End of explanation
x = 'hour'
o = range(1, 24)
sns.pointplot(x=x, y = 'is_attack', data = df_span, order = o)
plt.ylabel('Percent of comments that are attacks')
Explanation: Not much of a pattern; the intervals are large relative to the trend.
Q: Is there an hour of day effect?
End of explanation |
8,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
circle()
The following function, circle(xy, radius, kwargs=None), is a customised wrapper for patches.Ellipse to draw nice circles on a figure even if the axes have very different dimensions. I explain it here with some examples.
Step2: Load the circle-function
(You can find it in the notebook adashof.ipynb, in the same repo as this notebook).
Step3: Linear example
The following are four different plots with linear scales to illustrate the problem addressed and the usage of circle
Step4: Semilog and loglog example
The final two examples show that this method also works for semilog and loglog plots. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc, patches
# Increase font size, set CM as default text, and use LaTeX
rc('font', **{'size': 16, 'family': 'serif', 'serif': ['Computer Modern Roman']})
rc('text', usetex=True)
# Define colours (taken from http://colorbrewer2.org)
clr = ['#377eb8', '#e41a1c', '#4daf4a', '#984ea3', '#ff7f00', '#ffff33', '#a65628']
Explanation: circle()
The following function, circle(xy, radius, kwargs=None), is a customised wrapper for patches.Ellipse to draw nice circles on a figure even if the axes have very different dimensions. I explain it here with some examples.
End of explanation
%load -s circle adashof.py
def circle(xy, radius, kwargs=None):
Create circle on figure with axes of different sizes.
Plots a circle on the current axes using `plt.Circle`, taking into account
the figure size and the axes units.
It is done by plotting in the figure coordinate system, taking the aspect
ratio into account. In this way, the data dimensions do not matter.
However, if you adjust `xlim` or `ylim` after plotting `circle`, it will
screw them up; set `plt.axis` before calling `circle`.
Parameters
----------
xy, radius, kwars :
As required for `plt.Circle`.
# Get current figure and axis
fig = plt.gcf()
ax = fig.gca()
# Calculate figure dimension ratio width/height
pr = fig.get_figwidth()/fig.get_figheight()
# Get the transScale (important if one of the axis is in log-scale)
tscale = ax.transScale + (ax.transLimits + ax.transAxes)
ctscale = tscale.transform_point(xy)
cfig = fig.transFigure.inverted().transform(ctscale)
# Create circle
if kwargs == None:
circ = patches.Ellipse(cfig, radius, radius*pr,
transform=fig.transFigure)
else:
circ = patches.Ellipse(cfig, radius, radius*pr,
transform=fig.transFigure, **kwargs)
# Draw circle
ax.add_artist(circ)
Explanation: Load the circle-function
(You can find it in the notebook adashof.ipynb, in the same repo as this notebook).
End of explanation
# Generate some data to plot
x = np.arange(101)/100*2*np.pi
y = np.sin(x)
# Circle centres
cxy = (np.arange(5)*np.pi/2, np.sin(np.arange(5)*np.pi/2))
## 1.a Using plt.Circle on equal axes
# Create figure
fig1a = plt.figure()
# Set axis to equal
plt.axis('equal')
# Plot data and set limits
plt.plot(x, y, '-', c=clr[6], lw=2)
plt.xlim([min(x), max(x)])
# Draw circles with plt.Circle
# (`clip_on: False` ensures that the circles are not cut-off at fig-border.)
for i in range(5):
circ = plt.Circle((cxy[0][i], cxy[1][i]), .25, **{'color':clr[i], 'clip_on': False})
plt.gca().add_artist(circ)
# Set labels
plt.title('1.a plt.Circle with equal axes')
plt.text(1, -1.5, r'$y = \rm{sin}(x)$', fontsize=20)
plt.xlabel('x')
plt.ylabel('y')
# Multiply y-values by 5, to make the effect of unequal axes more apparent
y *= 5
cxy = (np.arange(5)*np.pi/2, 5*np.sin(np.arange(5)*np.pi/2))
## 1.b Using plt.Circle on unequal axes
# Create figure
fig1b = plt.figure()
# Plot data and set limits
plt.plot(x, y, '-', c=clr[6], lw=2)
plt.axis([min(x), max(x), 1.2*min(y), 1.2*max(y)])
# Draw circles with plt.Circle
for i in range(5):
circ = plt.Circle((cxy[0][i], cxy[1][i]), .25, **{'color':clr[i], 'clip_on': False})
plt.gca().add_artist(circ)
# Set labels
plt.title('1.b plt.Circle with unequal axes')
plt.text(1, -5, r'$y = 5\times\rm{sin}(x)$', fontsize=20)
plt.xlabel('x')
plt.ylabel('y')
## 1.c Using patches.Ellipse on unequal axes
# Create figure
fig1c = plt.figure()
# Plot data and set limits
plt.plot(x, y, '-', c=clr[6], lw=2)
plt.axis([min(x), max(x), 1.2*min(y), 1.2*max(y)])
# Calculate width and height of Ellipse to create an apparent circle
factor = fig1c.get_figwidth()*(max(1.2*y)-min(1.2*y))/fig1c.get_figheight()/(max(x)-min(x))
# Draw circles with patches.Ellipse
for i in range(5):
circ = patches.Ellipse((cxy[0][i], cxy[1][i]), .5, .5*factor,
**{'color':clr[i], 'clip_on': False})
plt.gca().add_artist(circ)
# Set labels
plt.title('1.c patches.Ellipse with unequal axes')
plt.text(1, -5, r'$y = 5\times\rm{sin}(x)$', fontsize=20)
plt.xlabel('x')
plt.ylabel('y')
## 1.d Using circle on unequal axes
# Create figure
fig1d = plt.figure()
# Plot data and set limits (before plotting the circles!)
plt.plot(x, y, '-', c=clr[6], lw=2)
plt.axis([min(x), max(x), 1.2*min(y), 1.2*max(y)])
# Draw circles with circle
for i in range(5):
circle((cxy[0][i], cxy[1][i]), .06, {'color':clr[i], 'clip_on': False})
# Set labels
plt.title('1.d `circle` with unequal axes')
plt.text(1, -5, r'$y = 5\times\rm{sin}(x)$', fontsize=20)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Linear example
The following are four different plots with linear scales to illustrate the problem addressed and the usage of circle:
1.a Using plt.Circle to draw circles on a figure with equal axes.
1.b Using plt.Circle to draw circles on a figure with unequal axes.
1.c Using patches.Ellipse on a figure with unequal axes to draw apparent circles.
1.d Using this custom circle function on a figure with unequal axes to plot circles.
End of explanation
# Generate some data to plot
lx = np.arange(1, 102)
ly = lx**2
# Circle centres
lcxy = (np.arange(3)*50+1, (np.arange(3)*50+1)**2)
## 2.a Semilog
# Create figure
fig2a = plt.figure()
# Plot data and set limits
plt.semilogy(lx, ly, '-', c=clr[6], lw=2)
plt.xlim([min(lx), max(lx)])
# Plot circles
for i in range(3):
circle((lcxy[0][i], lcxy[1][i]), 0.07, {'color':clr[i], 'clip_on': False})
# Set labels
plt.title('2.a `circle` with semilog-axes')
plt.text(60, 10, r'$y = x^2$', fontsize=20)
plt.xlabel('x')
plt.ylabel('y')
## 2.b Loglog
# Create figure
fig2a = plt.figure()
# Plot data and set limits
plt.loglog(lx, ly, '-', c=clr[6], lw=2)
plt.xlim([min(lx), max(lx)])
# Plot circles
for i in range(3):
circle((lcxy[0][i], lcxy[1][i]), .07, {'color':clr[i], 'clip_on': False})
# Set labels
plt.title('2.b `circle` with loglog-axes')
plt.text(20, 10, r'$y = x^2$', fontsize=20)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Semilog and loglog example
The final two examples show that this method also works for semilog and loglog plots.
End of explanation |
8,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Dynamic Programming
We have studied the theory of dynamic programming in discrete time under certainty. Let's review what we know so far, so that we can start thinking about how to take it to the computer.
The Problem
We want to find a sequence ${x_t}_{t=0}^\infty$ and a function $V^*
Step3: Clearly the more points we have the better our approximation. But, more points means more computations and more time to get those approximations. Since we will be iterating over approximations, we might not want to use too many points, but be smart about the choice of points or we might want to use less points for a start and then increase the number of points once we have a good candidate solution to our fixed point problem.
In order to make it easy to define interpolated functions, we define a new class of Python object
Step4: We can now define our interpolated sinus function as follows
Step5: Optimal Growth
Let's start by computing the solution to an optimal growth problem, in which a social planner seeks to find paths $\{c_t,k_t\}$ such that
\begin{align}
\max_{\{c_t,k_t\}}&\sum_{t=0}^{\infty}\beta^{t}u(c_{t})\\[.2cm]
\text{s.t. }&k_{t+1}\leq f(k_{t})+(1-\delta)k_{t}-c_{t}\\[.2cm]
c_{t}\geq0,&\ k_{t}\geq0,\ k_{0}\text{ is given}.
\end{align}
As usual we assume that our utility function $u(\cdot)$ and production function $f(\cdot)$ are Neoclassical. Under these conditions we have seen that our problem satisfies the conditions of our previous theorem and thus we know a unique solution exists.
An example with analytical solution
Let's assume that $u(c)=\ln(c)$, $f(k)=k^\alpha$, and $\delta=1$. For this case we have seen that the solution implies
\begin{align}
&\text{Value Function
Step6: Let's fix the value of the fundamental parameters so we can realize computations
Step7: Now let's focus on the Value function iteration
Step8: Here we have created a grid on $[gridmin,gridmax]$ that has a number of points given by gridsize. Since we know that the Value functions is stricly concave, our grid has more points closer to zero than farther away
Step9: Now we need a function, which for given $V_0$ solves
$$\sup\limits _{y\in G(x)}\left{ U(x,y)+\beta V(y)\right}.$$
Let's use one of Scipy's optimizing routines
Step10: Since fminbound returns
$$\arg\min\limits _{y\in [\underline x,\bar x]}\left{ U(x,y)+\beta V(y)\right}$$
we have to either replace our objective function for its negative or, better yet, define a function that uses fminbound and returns the maximum and the maximizer
Step13: Note
We could have included other parameters to pass to our maximizer and maximum functions, e.g. to allow us to manipulate the options of fminbound
The Bellman Operator
Step14: Given a linear interpolation of our guess for the Value function, $V_0=w$, the first function returns a LinInterp object, which is the linear interpolation of the function generated by the Bellman Operator on the finite set of points on the grid. The second function returns what Stachurski (2009) calls a w-greedy policy, i.e. the function that maximizes the RHS of the Bellman Operator.
Now we are ready to work on the iteration
Step15: Our initial guess $V_0$
Step16: After one interation
Step17: Doing it by hand is too slow..let's automate this process
Step18: Does it look like we converged? Let's compare our estimated Value function V1 and the actual function Va and compute the error at each point.
Step19: Exercise
Use the policy function to compute the optimal policy. Compare it to the actual one
Do the same for the consumption function. Find the savings rate and plot it.
Construct the paths of consumption and capital starting from $k_0=.1$. Show the time series and the paths in the consumption-capital space
Estimate the level of steady state capital and consumption. Show graphically that it is lower than the Golden Rule Level.
Repeat the exercise with other values of $\alpha,\beta,\delta,\sigma,k_0$. Can you write a function or class such that it will generate the whole analysis for given values of the parameters and functions. Can you generalize it in order to analyze the effects of changing the utility or production functions?
Solution
Since we already have V1, we can just apply policy(V1) to get the result
Step20: Since $c = f(k) + (1-\delta) k - k'$
Step21: Since $s = f(k) - c = k' - (1-\delta) k = $ investment,
Step22: Start with $k_0=0.1$ and follow the economy for 20 periods.
Step24: Steady state level of capital and consumption are given by these last elements of these lists | Python Code:
from __future__ import division
%pylab --no-import-all
%matplotlib inline
from numpy import interp
interp?
x = np.linspace(0, np.pi, 100)
plt.figure(1)
plt.plot(x, np.sin(x), label='Actual Function')
for i in np.arange(3,11,2):
fig1 = plt.figure(1)
xp = np.linspace(0, np.pi, i)
yp = np.sin(xp)
y = interp(x, xp, yp)
plt.plot(x, y, label='Interpolation ' + str(i))
fig2 = plt.figure(2)
plt.title('Error with up to ' + str(i) + ' points in interpolation')
plt.ylabel('Error')
plt.plot(y - np.sin(x), label=str(i))
plt.legend(loc=8)
plt.figure(1)
plt.legend(loc=8)
plt.show()
fig1
fig2
Explanation: Introduction to Dynamic Programming
We have studied the theory of dynamic programming in discrete time under certainty. Let's review what we know so far, so that we can start thinking about how to take it to the computer.
The Problem
We want to find a sequence $\{x_t\}_{t=0}^\infty$ and a function $V^*:X\to\mathbb{R}$ such that
$$V^{\ast}\left(x_{0}\right)=\sup\limits _{\left\{ x_{t}\right\} _{t=0}^{\infty}}\sum\limits _{t=0}^{\infty}\beta^{t}U(x_{t},x_{t+1})$$
subject to $x_{t+1}\in G(x_{t})\subseteq X\subseteq\mathbb{R}^K$, for all $t\geq0$ and $x_0\in\mathbb{R}$ given. We assume $\beta\in(0,1)$.
We have seen that we can analyze this problem by solving instead the related problem
$$V(x)=\sup\limits _{y\in G(x)}\left\{ U(x,y)+\beta V(y)\right\} ,\text{ for all }x\in X.$$
Basic Results
Assumptions
$G\left(x\right)$ is nonempty for all $x\in X$ ; and for all $x_{0}\in X$ and $\mathbf{x}\in \Phi (x_{0})$, $\lim\nolimits_{n\rightarrow\infty}\sum_{t=0}^{n}\beta^{t}U(x_{t},x_{t+1})$ exists and is finite.
$X$ is a compact subset of $\mathbb{R}^{K}$, $G$ is nonempty, compact-valued and continuous. Moreover,
$U:\mathbf{X}_{G}\rightarrow\mathbb{R}$ is continuous, where $\mathbf{X}_{G}=\left\{ (x,y)\in X\times X:y\in G(x)\right\}$
$U$ is strictly concave and $G$ is convex
For each $y\in X$, $U(\cdot,y)$ is strictly increasing in each of its first $K$ arguments, and $G$ is monotone in the sense that $x\leq x^{\prime}$ implies $G(x)\subset G(x^{\prime})$.
$U$ is continuously differentiable on the interior of its domain $\mathbf{X}_{G}$.
Let $\Phi (x_{t})=\{\{x_{s}\}_{s=t}^{\infty}:x_{s+1}\in G(x_{s})\text{, for }s=t,t+1,...\}$ and assume that $\lim_{t\rightarrow\infty}\beta^{t}V\left(x_{t}\right)=0$ for all $\left(x,x_{1},x_{2},...\right)\in \Phi (x)$.
If all of these conditions are satisfied, then we have the following
Theorem
There exists a unique (value) function
$$V^*(x_0)=V(x_0),$$
which is continuous, strictly increasing, strictly concave, and differentiable. Also, there exists a unique path $\{x^*_t\}_{t=0}^\infty$, which starting from the given $x_0$ attains the value $V^*(x_0)$. The path can be found through a unique continuous policy function $\pi: X\to X$ such that $x^*_{t+1}=\pi(x^*_t)$.
Taking it to the computer
Ok. Now that we know the conditions for the existence and uniqueness (plus other characteristics) of our problem, how do we go about solving it?
The idea is going to be simple and is based on what we saw when we proved the contraction mapping theorem and the proof of the previous theorem (Yes I know...we split this in various steps and intermediate results, which might have confused you).
Remember that our Bellman Operator $T: C(X)\to C(X)$ defined as
$$T(V(x))\equiv\sup\limits _{y\in G(x)}\left\{ U(x,y)+\beta V(y)\right\}$$
assigns a continuous, strictly increasing, strictly concave function $T(V)$ to each continuous, increasing, and concave function $V$ defined on $X$. Since $T(V)$ is a contraction mapping, we know that if $V_0$ is any initial continuous, increasing, and concave function defined on $X$, then $T^n(V_0)\to V^*$. This is precisely what we are going to do using the computer (well we will also do it by hand for a couple of examples).
Value function iteration
So, now that we have a strategy to tackle the problem, and you have learned some basic Python at Code Academy and IPython in our other notebook, we are ready to write some code and do some dynamic economic analysis.
But before we start, there is one issue I want to highlight. Notice that our state space $X$ is not assumed to be finite, and clearly the fact that our functions are continuous imply that we cannot be in a finite problem. So how do we represent such an infinite object in a computer, which only has finite memory? The solution is to take an approximation to the function, what Stachurski (2009) calls a fitted function. There are various methods to approximate functions (see Judd (1998) for an excellent presentation). The simplest method is a linear interpolation, which is what we will use here.
The idea behind linear interpolation is quite simple. Assume we want to approximate the function $V: X\to X$, $X\subseteq\mathbb{R}$. The only thing we need is a finite set ${x_i}{i=0}^N\subseteq X$ for which we compute the value under $V$, i.e. we create the finite set of values ${V_i=V(x_i)}{i=0}^N$. Then our approximation to the function $V$, $\hat V$, will be defined as
$$\hat V(x)=V_{i-1}+\frac{V_i-V_{i-1}}{x_i-x_{i-1}}(x-x_{i-1}) \quad\text{ if } x_{i-1}\le x < x_i.$$
In principle we could construct our own interpolation function, but Scipy has already optimized approximation algorithms, so let's use that package instead. Let's see what a linear interpolation of $\sin(x)$ would look like.
End of explanation
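As an aside, the interpolation formula above is easy to hand-roll. The sketch below is illustrative only (np.interp is what we actually use) and assumes xp is sorted and x lies inside [xp[0], xp[-1]]:
def manual_interp(x, xp, yp):
    # index of the first grid point with xp[i] >= x, clamped to a valid range
    i = min(max(np.searchsorted(xp, x), 1), len(xp) - 1)
    slope = (yp[i] - yp[i-1]) / (xp[i] - xp[i-1])
    return yp[i-1] + slope * (x - xp[i-1])
# e.g. manual_interp(0.3, xp, yp) should agree with interp(0.3, xp, yp)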
class LinInterp:
"Provides linear interpolation in one dimension."
def __init__(self, X, Y):
Parameters: X and Y are sequences or arrays
containing the (x,y) interpolation points.
self.X, self.Y = X, Y
def __call__(self, z):
Parameters: z is a number, sequence or array.
This method makes an instance f of LinInterp callable,
so f(z) returns the interpolation value(s) at z.
if isinstance(z, int) or isinstance(z, float):
return interp ([z], self.X, self.Y)[0]
else:
return interp(z, self.X, self.Y)
Explanation: Clearly the more points we have the better our approximation. But, more points means more computations and more time to get those approximations. Since we will be iterating over approximations, we might not want to use too many points, but be smart about the choice of points or we might want to use less points for a start and then increase the number of points once we have a good candidate solution to our fixed point problem.
In order to make it easy to define interpolated functions, we define a new class of Python object
End of explanation
xp = np.linspace(0, np.pi, 10)
yp = np.sin(xp)
oursin = LinInterp(xp, yp)
plt.plot(oursin(x));
Explanation: We can now define our interpolated sine function as follows
End of explanation
%%file optgrowthfuncs.py
def U(c, sigma=1):
'''This function returns the value of utility when the CRRA
coefficient is sigma. I.e.
u(c,sigma)=(c**(1-sigma)-1)/(1-sigma) if sigma!=1
and
u(c,sigma)=ln(c) if sigma==1
Usage: u(c,sigma)
'''
if sigma!=1:
u = (c**(1-sigma)-1) / (1-sigma)
else:
u = np.log(c)
return u
def F(K, L=1, alpha=.3, A=1):
'''
Cobb-Douglas production function
F(K,L)=K^alpha L^(1-alpha)
'''
return A * K**alpha * L**(1-alpha)
def Va(k, alpha=.3, beta=.9):
ab = alpha*beta
return np.log(1-ab) / (1-beta) + ab * np.log(ab) / ((1-beta) * (1-ab)) + alpha * np.log(k) / (1-ab)
def opk(k, alpha=.3, beta=.9):
return alpha * beta * k**alpha
def opc(k, alpha=.3, beta=.9):
return (1-alpha*beta)*k**alpha
# %load optgrowthfuncs.py
def U(c, sigma=1):
'''This function returns the value of utility when the CRRA
coefficient is sigma. I.e.
u(c,sigma)=(c**(1-sigma)-1)/(1-sigma) if sigma!=1
and
u(c,sigma)=ln(c) if sigma==1
Usage: u(c,sigma)
'''
if sigma!=1:
u = (c**(1-sigma)-1) / (1-sigma)
else:
u = np.log(c)
return u
def F(K, L=1, alpha=.3, A=1):
'''
Cobb-Douglas production function
F(K,L)=K^alpha L^(1-alpha)
'''
return A * K**alpha * L**(1-alpha)
def Va(k, alpha=.3, beta=.9):
ab = alpha*beta
return np.log(1-ab) / (1-beta) + ab * np.log(ab) / ((1-beta) * (1-ab)) + alpha * np.log(k) / (1-ab)
def opk(k, alpha=.3, beta=.9):
return alpha * beta * k**alpha
def opc(k, alpha=.3, beta=.9):
return (1-alpha*beta)*k**alpha
Explanation: Optimal Growth
Let's start by computing the solution to an optimal growth problem, in which a social planner seeks to find paths $\{c_t,k_t\}$ such that
\begin{align}
\max_{\{c_t,k_t\}}&\sum_{t=0}^{\infty}\beta^{t}u(c_{t})\\[.2cm]
\text{s.t. }&k_{t+1}\leq f(k_{t})+(1-\delta)k_{t}-c_{t}\\[.2cm]
c_{t}\geq0,&\ k_{t}\geq0,\ k_{0}\text{ is given}.
\end{align}
As usual we assume that our utility function $u(\cdot)$ and production function $f(\cdot)$ are Neoclassical. Under these conditions we have seen that our problem satisfies the conditions of our previous theorem and thus we know a unique solution exists.
An example with analytical solution
Let's assume that $u(c)=\ln(c)$, $f(k)=k^\alpha$, and $\delta=1$. For this case we have seen that the solution implies
\begin{align}
&\text{Value Function: } & V(k)=&\frac{\ln(1-\alpha\beta)}{1-\beta}+\frac{\alpha\beta\ln(\alpha\beta)}{(1-\alpha\beta)(1-\beta)}+\frac{\alpha}{1-\alpha\beta}\ln(k)\\[.2cm]
&\text{Optimal Policy: } & \pi\left(k\right)=&\beta\alpha k^{\alpha} \\[.2cm]
&\text{Optimal Consumption Function: } & c=&\left(1-\beta\alpha\right)k^{\alpha}\\[.2cm]
\end{align}
We will use these to compare the solution found by iteration of the Value function described above. Copy the Python functions you had defined in the previous notebook into the cell below and define Python functions for the actual optimal solutions given above.
End of explanation
alpha = .3
beta = .9
sigma = 1
delta = 1
Explanation: Let's fix the value of the fundamental parameters so we can realize computations
End of explanation
# Grid of values for state variable over which function will be approximated
gridmin, gridmax, gridsize = 0.1, 5, 300
grid = np.linspace(gridmin, gridmax**1e-1, gridsize)**10
Explanation: Now let's focus on the Value function iteration:
End of explanation
plt.hist(grid, bins=50);
plt.xlabel('State Space');
plt.ylabel('Number of Points');
plt.plot(grid, grid,'r.');
plt.title('State Space Grid');
Explanation: Here we have created a grid on $[gridmin,gridmax]$ that has a number of points given by gridsize. Since we know that the Value functions is stricly concave, our grid has more points closer to zero than farther away
End of explanation
from scipy.optimize import fminbound
fminbound?
Explanation: Now we need a function, which for given $V_0$ solves
$$\sup\limits _{y\in G(x)}\left\{ U(x,y)+\beta V(y)\right\}.$$
Let's use one of Scipy's optimizing routines
End of explanation
# Maximize function V on interval [a,b]
def maximum(V, a, b, **kwargs):
return float(V(fminbound(lambda x: -V(x), a, b, **kwargs)))
# Return Maximizer of function V on interval [a,b]
def maximizer(V, a, b, **kwargs):
return float(fminbound(lambda x: -V(x), a, b, **kwargs))
Explanation: Since fminbound returns
$$\arg\min\limits _{y\in [\underline x,\bar x]}\left\{ U(x,y)+\beta V(y)\right\}$$
we have to either replace our objective function for its negative or, better yet, define a function that uses fminbound and returns the maximum and the maximizer
End of explanation
# The following two functions are used to find the optimal policy and value functions using value function iteration
# Bellman Operator
def bellman(w):
The approximate Bellman operator.
Parameters: w is a LinInterp object (i.e., a
callable object which acts pointwise on arrays).
Returns: An instance of LinInterp that represents the optimal operator.
w is a function defined on the state space.
vals = []
for k in grid:
kmax = F(k, alpha=alpha) + (1-delta) * k
h = lambda kp: U(kmax - kp, sigma) + beta * w(kp)
vals.append(maximum(h, 0, kmax))
return LinInterp(grid, vals)
# Optimal policy
def policy(w):
For each function w, policy(w) returns the function that maximizes the
RHS of the Bellman operator.
Replace w for the Value function to get optimal policy.
The approximate optimal policy operator w-greedy (See Stachurski (2009)).
Parameters: w is a LinInterp object (i.e., a
callable object which acts pointwise on arrays).
Returns: An instance of LinInterp that captures the optimal policy.
vals = []
for k in grid:
kmax = F(k,alpha=alpha) + (1-delta) * k
h = lambda kp: U(kmax - kp,sigma) + beta * w(kp)
vals.append(maximizer(h, 0, kmax))
return LinInterp(grid, vals)
Explanation: Note
We could have included other parameters to pass to our maximizer and maximum functions, e.g. to allow us to manipulate the options of fminbound
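For instance (a hypothetical call, not used in what follows), any extra keyword arguments are simply forwarded to fminbound:
maximizer(lambda y: -(y - 1.0)**2, 0, 2, xtol=1e-8)   # returns approximately 1.0, computed with a tighter tolerance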
The Bellman Operator
End of explanation
# Parameters for the optimization procedures
count = 0
maxiter = 1000
tol = 1e-6
print('tol=%f' % tol)
Explanation: Given a linear interpolation of our guess for the Value function, $V_0=w$, the first function returns a LinInterp object, which is the linear interpolation of the function generated by the Bellman Operator on the finite set of points on the grid. The second function returns what Stachurski (2009) calls a w-greedy policy, i.e. the function that maximizes the RHS of the Bellman Operator.
Now we are ready to work on the iteration
End of explanation
V0 = LinInterp(grid,U(grid))
plt.figure(1)
plt.plot(grid, V0(grid), label='V'+str(count));
plt.plot(grid, Va(grid), label='Actual');
plt.legend(loc=0);
Explanation: Our initial guess $V_0$
End of explanation
plt.plot(grid, V0(grid), label='V'+str(count));
count += 1
V0 = bellman(V0)
plt.figure(1)
plt.plot(grid, V0(grid), label='V'+str(count));
plt.plot(grid, Va(grid), label='Actual');
plt.legend(loc=0);
plt.show();
Explanation: After one iteration
End of explanation
fig, ax = plt.subplots()
ax.set_xlim(grid.min(), grid.max())
ax.plot(grid, Va(grid), label='Actual', color='k', lw=2, alpha=0.6);
count=0
maxiter=200
tol=1e-6
while count<maxiter:
V1 = bellman(V0)
err = np.max(np.abs(np.array(V1(grid))-np.array(V0(grid))))
if np.mod(count,10)==0:
ax.plot(grid, V1(grid), color=plt.cm.jet(count / maxiter), lw=2, alpha=0.6);
#print('%d %2.10f ' % (count,err))
V0 = V1
count += 1
if err<tol:
print(count)
break
ax.plot(grid, V1(grid), label='Estimated', color='r', lw=2, alpha=0.6);
ax.legend(loc='lower right')
plt.draw();
fig
Explanation: Doing it by hand is too slow... let's automate this process
End of explanation
print(err)
err = Va(grid)-V1(grid)
plt.plot(grid,err);
print(err.max()-err.min())
fig, ax = plt.subplots()
ax.set_ylim(-10, -7)
ax.set_xlim(grid.min(), grid.max())
ax.plot(grid, Va(grid), label='Actual')
ax.plot(grid, V1(grid)+err.mean(), label='Estimated')
ax.legend(loc='lower right')
plt.show();
Explanation: Does it look like we converged? Let's compare our estimated Value function V1 and the actual function Va and compute the error at each point.
End of explanation
optcapital = policy(V1)
plt.figure(1)
plt.plot(grid, optcapital(grid), label='Estimated Policy Function');
plt.plot(grid, opk(grid), label='Actual Policy Function');
plt.legend(loc='lower right');
err = opk(grid)-optcapital(grid)
plt.plot(grid,err);
print(err.max()-err.min())
print(err.mean())
Explanation: Exercise
Use the policy function to compute the optimal policy. Compare it to the actual one
Do the same for the consumption function. Find the savings rate and plot it.
Construct the paths of consumption and capital starting from $k_0=.1$. Show the time series and the paths in the consumption-capital space
Estimate the level of steady state capital and consumption. Show graphically that it is lower than the Golden Rule Level.
Repeat the exercise with other values of $\alpha,\beta,\delta,\sigma,k_0$. Can you write a function or class such that it will generate the whole analysis for given values of the parameters and functions. Can you generalize it in order to analyze the effects of changing the utility or production functions?
Solution
Since we already have V1, we can just apply policy(V1) to get the result
End of explanation
def optcons(k, k1=None):
'''
Return value of consumption when capital today is k
'''
if k1 is None:
c = F(k) + (1-delta)*k - optcapital(k)
else:
c = F(k) + (1-delta)*k - k1
return c
plt.figure(1)
plt.plot(grid, optcons(grid), label='Estimated Consumption Function');
plt.plot(grid, opc(grid), label='Actual Consumption Function');
plt.legend(loc='lower right');
err = opc(grid)-optcons(grid)
plt.plot(grid,err);
print(err.max()-err.min())
print(err.mean())
Explanation: Since $c = f(k) + (1-\delta) k - k'$
End of explanation
def optsav(k):
'''
Estimated savings function
'''
s = F(k) - optcons(k)
return s
def ops(k):
s = F(k) - opc(k)
return s
plt.figure(1)
plt.plot(grid, optsav(grid), label='Estimated Savings Function');
plt.plot(grid, ops(grid), label='Actual Savings Function');
plt.legend(loc='lower right');
err = ops(grid)-optsav(grid)
plt.plot(grid,err);
print(err.max()-err.min())
print(err.mean())
Explanation: Since $s = f(k) - c = k' - (1-\delta) k = $ investment,
End of explanation
T = 20
kt = []
ct = []
k0 = 0.1
kt.append(k0)
for t in range(0,T):
k1 = optcapital(k0)
c0 = optcons(k0, k1)
k0 = k1
ct.append(c0)
kt.append(k0)
plt.plot(kt)
plt.xlabel(r'$t$')
plt.ylabel(r'$k_t$')
plt.plot(ct)
plt.xlabel(r'$t$')
plt.ylabel(r'$c_t$')
print(kt)
print(ct)
Explanation: Start with $k_0=0.1$ and follow the economy for 20 periods.
End of explanation
css = ct[-1]
kss = kt[-1]
print('Steady state capital is ', kss)
print('Steady state consumption is ', css)
k0 = 0.1
maxerr = 1e-14
maxiter = 1000
count = 0
while count<maxiter:
k1 = optcapital(k0)
err = np.abs(k1-k0)
if err<maxerr:
kss_true = k1
break
else:
k0 = k1
count += 1
css_true = optcons(kss, kss)
from IPython.display import display, Markdown
display(Markdown(
rf"""
True theoretical steady state value of capital {kss_true:1.5}.
Approximated numerical steady state value of capital {kss:1.5}.
True theoretical steady state value of consumption {css_true:1.5}.
Approximated numerical steady state value of consumption {css:1.5}.
Convergence to steady state in {count} iterations.
"""))
print(kss_true)
print(kss)
print(count)
print(css_true)
print(css)
Explanation: Steady state level of capital and consumption are given by these last elements of these lists
End of explanation |
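As a closing cross-check (a short addition based only on the policy function stated earlier): the fixed point of $k'=\alpha\beta k^{\alpha}$ has the closed form $k^*=(\alpha\beta)^{1/(1-\alpha)}$, which can be compared with the iterated values above.
kss_analytic = (alpha * beta)**(1 / (1 - alpha))                 # steady state capital
css_analytic = (1 - alpha * beta) * kss_analytic**alpha          # steady state consumption
print(kss_analytic, css_analytic)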
8,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gather precovery imaging
This notebook shows how to get precovery imaging for objects found with KBMOD. Once we have an object
identified we can record the observations we used in MPC format and use the following tools to search
other telescope data for possible images where the object may be present to help orbit determination.
Precovery Classes
Here we describe the methods we have for identifying precovery images
Step1: Create Query URL from MPC formatted file
In the notebook orbit_fitting_demo.ipynb we show how to use the ephem_utils.py code in KBMOD to create a file with the observations for an identified object in KBMOD and turn it into an MPC formatted file. The file created in that demo is saved here as kbmod_mpc.dat. We will use that file to show how the precovery interface works.
Step2: Query service via URL
The formatted URL above will work in a browser to return results. But we have the query_ssois function that will pull down the results and provide them in a pandas dataframe all in one go.
Step3: Create direct data download link
It's possible to take the URLs for the data provided in results_df['MetaData'] and turn them directly into a download link clickable from here in the notebook.
Step4: Compare KBMOD data to available data | Python Code:
from precovery_utils import ssoisPrecovery
Explanation: Gather precovery imaging
This notebook shows how to get precovery imaging for objects found with KBMOD. Once we have an object
identified we can record the observations we used in MPC format and use the following tools to search
other telescope data for possible images where the object may be present to help orbit determination.
Precovery Classes
Here we describe the methods we have for identifying precovery images:
Solar System Object Imaging Search (SSOIS)
Currently our precovery_utils only has a class to use the SSOIS service
from the Canadian Astronomy Data Centre and described in Gwyn, Hill and Kavelaars (2012).
End of explanation
ssois_query = ssoisPrecovery()
query_url = ssois_query.format_search_by_arc_url('kbmod_mpc.dat')
print(query_url)
Explanation: Create Query URL from MPC formatted file
In the notebook orbit_fitting_demo.ipynb we show how to use the ephem_utils.py code in KBMOD to create a file with the observations for an identified object in KBMOD and turn it into an MPC formatted file. The file created in that demo is saved here as kbmod_mpc.dat. We will use that file to show how the precovery interface works.
End of explanation
results_df = ssois_query.query_ssois(query_url)
results_df.head()
Explanation: Query service via URL
The formatted URL above will work in a browser to return results. But we have the query_ssois function that will pull down the results and provide them in a pandas dataframe all in one go.
End of explanation
from IPython.display import HTML
image_data_link = results_df["MetaData"].iloc[-1]
HTML('<a href="{}">{}</a>'.format(image_data_link, image_data_link))
Explanation: Create direct data download link
It's possible to take the URLs for the data provided in results_df['MetaData'] and turn them directly into a download link clickable from here in the notebook.
End of explanation
%pylab inline
from ephem_utils import mpc_reader
kbmod_observations = mpc_reader('kbmod_mpc.dat')
scatter(kbmod_observations.ra.deg, kbmod_observations.dec.deg, marker='x', s=200, c='r', label='KBMOD Observations', zorder=10)
plt.legend()
scatter(results_df['Object_RA'], results_df['Object_Dec'], c=results_df['MJD'])
cbar = plt.colorbar()
plt.xlabel('RA')
plt.ylabel('Dec')
cbar.set_label('MJD')
Explanation: Compare KBMOD data to available data
End of explanation |
8,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center"> Introduction to Natural Language Processing (NLP) Using Python </h1>
<h3 align="center"> Professor Fernando Vieira da Silva MSc.</h3>
<h2>A Classification Problem</h2>
<p>In this tutorial we will work through a practical example of a text classification problem. The goal is to identify whether a sentence is written in a "formal" or "informal" register.</p>
<b>1. Obtaining the corpus</b>
<p>To keep the problem simple, we will continue to use the Gutenberg corpus as formal text and use chat messages from the <b>nps_chat</b> corpus as informal text.</p>
<p>First of all, let's download the nps_chat corpus
Step1: <p>Now let's read both corpora and store the sentences in a single ndarray. Note that we will also have an ndarray indicating whether each text is formal or not. We start by storing the corpora in lists. We will use only 500 elements from each, for teaching purposes.</p>
Step2: <p>Next, we turn these lists into ndarrays, to use them in the pre-processing steps we already know.</p>
Step3: <b>2. Splitting into training and test datasets</b>
<p>For the results to be reliable, we need to evaluate them on a test dataset. So we will split the data at random, keeping 80% for training and the rest for testing the results shortly.</p>
Step4: <b>3. Training the classifier</b>
<p>For tokenization, we will use the same function as in the previous tutorial
Step5: <p>But now we will create a <b>pipeline</b> containing the TF-IDF vectorizer, SVD for feature reduction, and a classification algorithm. First, though, let's wrap our algorithm for choosing the number of SVD dimensions in a class that can be used within the pipeline
Step6: <p>Finally we can create our pipeline
Step7: <p>We are almost there... Now we will create a <b>RandomizedSearchCV</b> object to select the hyper-parameters of our classifier (i.e. parameters that are not learned during training). This step is important for finding the best configuration of the classification algorithm. To save training time, we will use a simple algorithm, <i>K nearest neighbors (KNN)</i>.
Step8: <p>And now let's train our algorithm, using the pipeline with feature selection
Step9: <b>4. Testing the classifier</b>
<p>Now let's run the classifier on our test dataset and look at the results
Step10: <b>5. Serializing the model</b><br>
Step11: <b>6. Loading and using a saved model </b><br> | Python Code:
import nltk
nltk.download('nps_chat')
from nltk.corpus import nps_chat
print(nps_chat.fileids())
Explanation: <h1 align="center"> Introduction to Natural Language Processing (NLP) Using Python </h1>
<h3 align="center"> Professor Fernando Vieira da Silva MSc.</h3>
<h2>A Classification Problem</h2>
<p>In this tutorial we will work through a practical example of a text classification problem. The goal is to identify whether a sentence is written in a "formal" or "informal" register.</p>
<b>1. Obtaining the corpus</b>
<p>To keep the problem simple, we will continue to use the Gutenberg corpus as formal text and use chat messages from the <b>nps_chat</b> corpus as informal text.</p>
<p>First of all, let's download the nps_chat corpus:</p>
End of explanation
import nltk
x_data_nps = []
for fileid in nltk.corpus.nps_chat.fileids():
x_data_nps.extend([post.text for post in nps_chat.xml_posts(fileid)])
y_data_nps = [0] * len(x_data_nps)
x_data_gut = []
for fileid in nltk.corpus.gutenberg.fileids():
x_data_gut.extend([' '.join(sent) for sent in nltk.corpus.gutenberg.sents(fileid)])
y_data_gut = [1] * len(x_data_gut)
x_data_full = x_data_nps[:500] + x_data_gut[:500]
print(len(x_data_full))
y_data_full = y_data_nps[:500] + y_data_gut[:500]
print(len(y_data_full))
Explanation: <p>Now let's read both corpora and store the sentences in a single ndarray. Note that we will also have an ndarray indicating whether each text is formal or not. We start by storing the corpora in lists. We will use only 500 elements from each, for teaching purposes.</p>
End of explanation
import numpy as np
x_data = np.array(x_data_full, dtype=object)
#x_data = np.array(x_data_full)
print(x_data.shape)
y_data = np.array(y_data_full)
print(y_data.shape)
Explanation: <p>Next, we turn these lists into ndarrays, to use them in the pre-processing steps we already know.</p>
End of explanation
train_indexes = np.random.rand(len(x_data)) < 0.80
print(len(train_indexes))
print(train_indexes[:10])
x_data_train = x_data[train_indexes]
y_data_train = y_data[train_indexes]
print(len(x_data_train))
print(len(y_data_train))
x_data_test = x_data[~train_indexes]
y_data_test = y_data[~train_indexes]
print(len(x_data_test))
print(len(y_data_test))
Explanation: <b>2. Splitting into training and test datasets</b>
<p>For the results to be reliable, we need to evaluate them on a test dataset. So we will split the data at random, keeping 80% for training and the rest for testing the results shortly.</p>
End of explanation
from nltk import pos_tag
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
import string
from nltk.corpus import wordnet
stopwords_list = stopwords.words('english')
lemmatizer = WordNetLemmatizer()
def my_tokenizer(doc):
words = word_tokenize(doc)
pos_tags = pos_tag(words)
non_stopwords = [w for w in pos_tags if not w[0].lower() in stopwords_list]
non_punctuation = [w for w in non_stopwords if not w[0] in string.punctuation]
lemmas = []
for w in non_punctuation:
if w[1].startswith('J'):
pos = wordnet.ADJ
elif w[1].startswith('V'):
pos = wordnet.VERB
elif w[1].startswith('N'):
pos = wordnet.NOUN
elif w[1].startswith('R'):
pos = wordnet.ADV
else:
pos = wordnet.NOUN
lemmas.append(lemmatizer.lemmatize(w[0], pos))
return lemmas
Explanation: <b>3. Training the classifier</b>
<p>For tokenization, we will use the same function as in the previous tutorial:</p>
End of explanation
from sklearn.decomposition import TruncatedSVD
class SVDDimSelect(object):
def fit(self, X, y=None):
self.svd_transformer = TruncatedSVD(n_components=X.shape[1]/2)
self.svd_transformer.fit(X)
cummulative_variance = 0.0
k = 0
for var in sorted(self.svd_transformer.explained_variance_ratio_)[::-1]:
cummulative_variance += var
if cummulative_variance >= 0.5:
break
else:
k += 1
self.svd_transformer = TruncatedSVD(n_components=k)
return self.svd_transformer.fit(X)
def transform(self, X, Y=None):
return self.svd_transformer.transform(X)
def get_params(self, deep=True):
return {}
Explanation: <p>But now we will create a <b>pipeline</b> containing the TF-IDF vectorizer, SVD for feature reduction, and a classification algorithm. First, though, let's wrap our algorithm for choosing the number of SVD dimensions in a class that can be used within the pipeline:</p>
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn import neighbors
clf = neighbors.KNeighborsClassifier(n_neighbors=10, weights='uniform')
my_pipeline = Pipeline([('tfidf', TfidfVectorizer(tokenizer=my_tokenizer)),\
('svd', SVDDimSelect()), \
('clf', clf)])
Explanation: <p>Finally we can create our pipeline:</p>
End of explanation
from sklearn.grid_search import RandomizedSearchCV
import scipy
par = {'clf__n_neighbors': range(1, 60), 'clf__weights': ['uniform', 'distance']}
hyperpar_selector = RandomizedSearchCV(my_pipeline, par, cv=3, scoring='accuracy', n_jobs=2, n_iter=20)
Explanation: <p>We are almost there... Now we will create a <b>RandomizedSearchCV</b> object to select the hyper-parameters of our classifier (i.e. parameters that are not learned during training). This step is important for finding the best configuration of the classification algorithm. To save training time, we will use a simple algorithm, <i>K nearest neighbors (KNN)</i>.
End of explanation
#print(hyperpar_selector)
hyperpar_selector.fit(X=x_data_train, y=y_data_train)
print("Best score: %0.3f" % hyperpar_selector.best_score_)
print("Best parameters set:")
best_parameters = hyperpar_selector.best_estimator_.get_params()
for param_name in sorted(par.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))
Explanation: <p>And now let's train our algorithm, using the pipeline with feature selection:</p>
End of explanation
from sklearn.metrics import *
y_pred = hyperpar_selector.predict(x_data_test)
print(accuracy_score(y_data_test, y_pred))
Explanation: <b>4. Testing the classifier</b>
<p>Now let's run the classifier on our test dataset and look at the results:</p>
End of explanation
import pickle
string_obj = pickle.dumps(hyperpar_selector)
model_file = open('model.pkl', 'wb')
model_file.write(string_obj)
model_file.close()
Explanation: <b>5. Serializing the model</b><br>
End of explanation
model_file = open('model.pkl', 'rb')
model_content = model_file.read()
obj_classifier = pickle.loads(model_content)
model_file.close()
res = obj_classifier.predict(["what's up bro?"])
print(res)
res = obj_classifier.predict(x_data_test)
print(accuracy_score(y_data_test, res))
res = obj_classifier.predict(x_data_test)
print(res)
formal = [x_data_test[i] for i in range(len(res)) if res[i] == 1]
for txt in formal:
print("%s\n" % txt)
informal = [x_data_test[i] for i in range(len(res)) if res[i] == 0]
for txt in informal:
print("%s\n" % txt)
res2 = obj_classifier.predict(["Emma spared no exertions to maintain this happier flow of ideas , and hoped , by the help of backgammon , to get her father tolerably through the evening , and be attacked by no regrets but her own"])
print(res2)
Explanation: <b>6. Loading and using a saved model </b><br>
End of explanation |
8,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Combining different machine learning algorithms into an ensemble model
Model ensembling is a class of techniques for aggregating together multiple different predictive algorithm into a sort of mega-algorithm, which can often increase the accuracy and reduce the overfitting of your model. Ensembling approaches often work surprisingly well. Many winners of competitive data science competitions use model ensembling in one form or another. In this tutorial, we will take you through the steps of building your own ensemble of a random forest, support vector machine, and neural network for doing a classification problem. We’ll be working on the famous spam dataset and trying to predict whether a certain email is spam or not, and using the standard Python machine learning stack (scikit/numpy/pandas).
You have probably already encountered several uses of model ensembling. Random forests are a type of ensemble algorithm that aggregates together many individual tree base learners. If you’re interested in deep learning, one common technique for improving classification accuracies is training different networks and getting them to vote on classifications for test instances (look at dropout for a related but wacky take on ensembling). If you’re familiar with bagging or boosting algorithms, these are very explicit examples of ensembling.
Regardless of the specifics, the general idea behind ensembling is this
Step1: 2. Cleaning up and summarizing the data
Lookin' good! Let's convert the data into a nice format. We rearrange some columns, check out what the columns are.
Step2: 3) Splitting data into training and testing sets
Our data is now nice and squeaky clean! This definitely always happens in real life.
Next up, let's scale the data and split it into a training and test set.
Step3: 4. Running algorithms on the data
Now it's time to train some algorithms. We are doing binary classification; we could also have used logistic regression, kNN, etc.
4.1 Random forests
Let’s build a random forest. A great explanation of random forests can be found here. Briefly, random forests build a collection of classification trees, which each try to predict classes by recursively splitting the data on features that split classes best. Each tree is trained on bootstrapped data, and each split is only allowed to use certain variables. So, an element of randomness is introduced, a variety of different trees are built, and the 'random forest' ensembles together these base learners.
A hyperparameter is something that influences the performance of your model, but isn't directly tuned during model training. The main hyperparameters to adjust for random forests are n_estimators and max_features. n_estimators controls the number of trees in the forest - the more the better, but more trees come at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node.
We could also choose to tune various other hyperparameters, like max_depth (the maximum depth of a tree, which controls how tall we grow our trees and influences overfitting) and the choice of the purity criterion (which are specific formulas for calculating how good or 'pure' our splits make the terminal nodes).
We are doing grid search to find optimal hyperparameter values, which tries out each given value for each hyperparameter of interest and sees how well it performs using (in this case) 10-fold cross-validation (CV). As a reminder, in cross-validation we try to estimate the test-set performance of a model; in k-fold CV, the estimate is made by repeatedly partitioning the dataset into k parts and 'testing' on 1/kth of it. We could have also tuned our hyperparameters using randomized search, which samples some values from a distribution rather than trying out all given values. Either is probably fine.
The following code block takes about a minute to run.
Step4: 93-95% accuracy, not too shabby! Have a look and see how random forests with suboptimal hyperparameters fare. We got around 91-92% accuracy on the out of the box (untuned) random forests, which actually isn't terrible.
2) Second algorithm
Step5: Looks good! This is similar performance to what we saw in the random forests.
3) Third algorithm
Step6: Looks like this neural network (given this dataset, architecture, and hyperparameterisation) is doing slightly worse on the spam dataset. That's okay, it could still be picking up on a signal that the random forest and SVM weren't.
Machine learning algorithms... ensemble!
4) Majority vote on classifications | Python Code:
import pandas as pd
import numpy as np
# Import the dataset
dataset_path = "spam_dataset.csv"
dataset = pd.read_csv(dataset_path, sep=",")
# Take a peak at the data
dataset.head()
Explanation: Combining different machine learning algorithms into an ensemble model
Model ensembling is a class of techniques for aggregating together multiple different predictive algorithm into a sort of mega-algorithm, which can often increase the accuracy and reduce the overfitting of your model. Ensembling approaches often work surprisingly well. Many winners of competitive data science competitions use model ensembling in one form or another. In this tutorial, we will take you through the steps of building your own ensemble of a random forest, support vector machine, and neural network for doing a classification problem. We’ll be working on the famous spam dataset and trying to predict whether a certain email is spam or not, and using the standard Python machine learning stack (scikit/numpy/pandas).
You have probably already encountered several uses of model ensembling. Random forests are a type of ensemble algorithm that aggregates together many individual tree base learners. If you’re interested in deep learning, one common technique for improving classification accuracies is training different networks and getting them to vote on classifications for test instances (look at dropout for a related but wacky take on ensembling). If you’re familiar with bagging or boosting algorithms, these are very explicit examples of ensembling.
Regardless of the specifics, the general idea behind ensembling is this: different classes of algorithms (or differently parameterized versions of the same type of algorithm) might be good at picking up on different signals in the dataset. Combining them means that you can model the data better, leading to better predictions. Furthermore, different algorithms might be overfitting to the data in various ways, but by combining them, you can effectively average away some of this overfitting.
We won’t do fancy visualizations of the dataset here. Check out this tutorial or our bootcamp to learn Plotly and matplotlib. Here, we are focused on optimizing different algorithms and combining them to boost performance.
Let's get started!
1. Loading up the data
Load dataset. We often want our input data to be a matrix (X) and the vector of instance labels as a separate vector (y).
End of explanation
# Reorder the data columns and drop email_id
cols = dataset.columns.tolist()
cols = cols[2:] + [cols[1]]
dataset = dataset[cols]
# Examine shape of dataset and some column names
print dataset.shape
print dataset.columns.values[0:10]
# Summarise feature values
dataset.describe()
# Convert dataframe to numpy array and split
# data into input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-1].astype(float)
y = npArray[:,-1]
Explanation: 2. Cleaning up and summarizing the data
Lookin' good! Let's convert the data into a nice format. We rearrange some columns, check out what the columns are.
End of explanation
from sklearn import preprocessing
from sklearn.cross_validation import train_test_split
# Scale and split dataset
X_scaled = preprocessing.scale(X)
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X_scaled, y, random_state=1)
Explanation: 3) Splitting data into training and testing sets
Our day is now nice and squeaky clean! This definitely always happens in real life.
Next up, let's scale the data and split it into a training and test set.
End of explanation
from sklearn import metrics
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Search for good hyperparameter values
# Specify values to grid search over
n_estimators = np.arange(1, 30, 5)
max_features = np.arange(1, X.shape[1], 10)
max_depth = np.arange(1, 100, 10)
hyperparameters = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth}
# Grid search using cross-validation
gridCV = GridSearchCV(RandomForestClassifier(), param_grid=hyperparameters, cv=10, n_jobs=4)
gridCV.fit(XTrain, yTrain)
best_n_estim = gridCV.best_params_['n_estimators']
best_max_features = gridCV.best_params_['max_features']
best_max_depth = gridCV.best_params_['max_depth']
# Train classifier using optimal hyperparameter values
# We could have also gotten this model out from gridCV.best_estimator_
clfRDF = RandomForestClassifier(n_estimators=best_n_estim, max_features=best_max_features, max_depth=best_max_depth)
clfRDF.fit(XTrain, yTrain)
RF_predictions = clfRDF.predict(XTest)
print (metrics.classification_report(yTest, RF_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions),2))
Explanation: 4. Running algorithms on the data
Now it's time to train some algorithms. We are doing binary classification; we could also have used logistic regression, kNN, etc.
4.1 Random forests
Let’s build a random forest. A great explanation of random forests can be found here. Briefly, random forests build a collection of classification trees, which each try to predict classes by recursively splitting the data on features that split classes best. Each tree is trained on bootstrapped data, and each split is only allowed to use certain variables. So, an element of randomness is introduced, a variety of different trees are built, and the 'random forest' ensembles together these base learners.
A hyperparameter is something that influences the performance of your model, but isn't directly tuned during model training. The main hyperparameters to adjust for random forests are n_estimators and max_features. n_estimators controls the number of trees in the forest - the more the better, but more trees come at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node.
We could also choose to tune various other hyperparameters, like max_depth (the maximum depth of a tree, which controls how tall we grow our trees and influences overfitting) and the choice of the purity criterion (which are specific formulas for calculating how good or 'pure' our splits make the terminal nodes).
We are doing grid search to find optimal hyperparameter values, which tries out each given value for each hyperparameter of interest and sees how well it performs using (in this case) 10-fold cross-validation (CV). As a reminder, in cross-validation we try to estimate the test-set performance of a model; in k-fold CV, the estimate is made by repeatedly partitioning the dataset into k parts and 'testing' on 1/kth of it. We could have also tuned our hyperparameters using randomized search, which samples some values from a distribution rather than trying out all given values. Either is probably fine.
The following code block takes about a minute to run.
End of explanation
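As a minimal sketch of the randomized-search alternative mentioned above (illustrative only; the distributions below are assumptions, and RandomizedSearchCV was already imported alongside GridSearchCV):
from scipy.stats import randint
rf_distributions = {'n_estimators': randint(1, 30),
                    'max_features': randint(1, X.shape[1]),
                    'max_depth': randint(1, 100)}
randCV = RandomizedSearchCV(RandomForestClassifier(), param_distributions=rf_distributions,
                            n_iter=20, cv=10, n_jobs=4)
randCV.fit(XTrain, yTrain)
print (randCV.best_params_)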
from sklearn.svm import SVC
# Search for good hyperparameter values
# Specify values to grid search over
g_range = 2. ** np.arange(-15, 5, step=2)
C_range = 2. ** np.arange(-5, 15, step=2)
hyperparameters = [{'gamma': g_range,
'C': C_range}]
# Grid search using cross-validation
grid = GridSearchCV(SVC(), param_grid=hyperparameters, cv= 10)
grid.fit(XTrain, yTrain)
bestG = grid.best_params_['gamma']
bestC = grid.best_params_['C']
# Train SVM and output predictions
rbfSVM = SVC(kernel='rbf', C=bestC, gamma=bestG)
rbfSVM.fit(XTrain, yTrain)
SVM_predictions = rbfSVM.predict(XTest)
print metrics.classification_report(yTest, SVM_predictions)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, SVM_predictions),2)
Explanation: 93-95% accuracy, not too shabby! Have a look and see how random forests with suboptimal hyperparameters fare. We got around 91-92% accuracy on the out of the box (untuned) random forests, which actually isn't terrible.
2) Second algorithm: support vector machines
Let's train our second algorithm, support vector machines (SVMs) to do the same exact prediction task. A great introduction to the theory behind SVMs can be read here. Briefly, SVMs search for hyperplanes in the feature space which best divide the different classes in your dataset. Crucially, SVMs can find non-linear decision boundaries between classes using a process called kernelling, which projects the data into a higher-dimensional space. This sounds a bit abstract, but if you've ever fit a linear regression to power-transformed variables (e.g. maybe you used x^2, x^3 as features), you're already familiar with the concept.
SVMs can use different types of kernels, like Gaussian or radial ones, to throw the data into a different space. The main hyperparameters we must tune for SVMs are gamma (a kernel parameter, controlling how far we 'throw' the data into the new feature space) and C (which controls the bias-variance tradeoff of the model).
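To get a feel for the kernel choice, one could also fit a linear-kernel SVM on the same split (a sketch for comparison only; the tuned RBF model above is what the analysis uses):
# Sketch only: a linear-kernel SVM to contrast with the RBF kernel
linearSVM = SVC(kernel='linear', C=1.0)  # C value is illustrative, not tuned
linearSVM.fit(XTrain, yTrain)
print ("Linear-kernel SVM accuracy:", round(metrics.accuracy_score(yTest, linearSVM.predict(XTest)),2))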
End of explanation
from multilayer_perceptron import multilayer_perceptron
# Search for good hyperparameter values
# Specify values to grid search over
layer_size_range = [(3,2),(10,10),(2,2,2),10,5] # different networks shapes
learning_rate_range = np.linspace(.1,1,3)
hyperparameters = [{'hidden_layer_sizes': layer_size_range, 'learning_rate_init': learning_rate_range}]
# Grid search using cross-validation
grid = GridSearchCV(multilayer_perceptron.MultilayerPerceptronClassifier(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)
# Output best hyperparameter values
best_size = grid.best_params_['hidden_layer_sizes']
best_best_lr = grid.best_params_['learning_rate_init']
# Train neural network and output predictions
nnet = multilayer_perceptron.MultilayerPerceptronClassifier(hidden_layer_sizes=best_size, learning_rate_init=best_best_lr)
nnet.fit(XTrain, yTrain)
NN_predictions = nnet.predict(XTest)
print (metrics.classification_report(yTest, NN_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, NN_predictions),2))
Explanation: Looks good! This is similar performance to what we saw in the random forests.
3) Third algorithm: neural network
Finally, let's jump on the hype wagon and throw neural networks at our problem.
Neural networks (NNs) represent a different way of thinking about machine learning algorithms. A great place to start learning about neural networks and deep learning is this resource. Briefly, NNs are composed of multiple layers of artificial neurons, which individually are simple processing units that weigh up input data. Together, layers of neurons can work together to compute some very complex functions of the data, which in turn can make excellent predictions. You may be aware of some of the crazy results that NN research has recently achieved.
Here, we train a shallow, fully-connected, feedforward neural network on the spam dataset. Other types of neural network implementations in scikit are available here. The hyperparameters we optimize here are the overall architecture (number of neurons in each layer and the number of layers) and the learning rate (which controls how quickly the parameters in our network change during the training phase; see gradient descent and backpropagation).
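Note that the external multilayer_perceptron module used here predates scikit-learn's built-in implementation; if you have scikit-learn 0.18 or newer, a roughly equivalent network can be specified with MLPClassifier (a sketch with illustrative settings, not the model used for the results above):
# Sketch only: the built-in scikit-learn equivalent of the network above
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(hidden_layer_sizes=(10, 10), learning_rate_init=0.1, max_iter=500)
mlp.fit(XTrain, yTrain)
print ("MLP accuracy:", round(metrics.accuracy_score(yTest, mlp.predict(XTest)),2))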
End of explanation
# here's a rough solution
import collections
# stick all predictions into a dataframe
predictions = pd.DataFrame(np.array([RF_predictions, SVM_predictions, NN_predictions])).T
predictions.columns = ['RF', 'SVM', 'NN']
predictions = pd.DataFrame(np.where(predictions=='yes', 1, 0),
columns=predictions.columns,
index=predictions.index)
# initialise empty array for holding predictions
ensembled_predictions = np.zeros(shape=yTest.shape)
# majority vote and output final predictions
for test_point in range(predictions.shape[0]):
predictions.iloc[test_point,:]
counts = collections.Counter(predictions.iloc[test_point,:])
majority_vote = counts.most_common(1)[0][0]
# output votes
ensembled_predictions[test_point] = majority_vote.astype(int)
print ("The majority vote for test point", test_point, "is: ", majority_vote)
# Get final accuracy of ensembled model
yTest[yTest == "yes"] = 1
yTest[yTest == "no"] = 0
print (metrics.classification_report(yTest.astype(int), ensembled_predictions.astype(int)))
print ("Ensemble Accuracy:", round(metrics.accuracy_score(yTest.astype(int), ensembled_predictions.astype(int)),2))
Explanation: Looks like this neural network (given this dataset, architecture, and hyperparameterisation) is doing slightly worse on the spam dataset. That's okay, it could still be picking up on a signal that the random forest and SVM weren't.
Machine learning algorithms... ensemble!
4) Majority vote on classifications
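Recent versions of scikit-learn also ship a VotingClassifier that automates this kind of majority vote; here is a sketch using only the two scikit-learn models above (the external neural-network class may not be compatible, and it assumes yTest still holds the original 'yes'/'no' labels):
# Sketch only: built-in hard (majority) voting over the tuned classifiers
from sklearn.ensemble import VotingClassifier
voting_clf = VotingClassifier(estimators=[('rf', clfRDF), ('svm', rbfSVM)], voting='hard')
voting_clf.fit(XTrain, yTrain)
print ("Voting ensemble accuracy:", round(metrics.accuracy_score(yTest, voting_clf.predict(XTest)),2))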
End of explanation |
8,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read CA CSV
Import directives
Step1: Export/import data (write/read files)
See http
Step2: CSV files
See http
Step3: Setting more options
Step4: Read CSV files
See http
Step5: Setting more options
Step6: JSON files
See http
Step7: Write JSON files
See http
Step8: Setting orient="split"
Step9: Setting orient="records"
Step10: Setting orient="index" (the default option for Series)
Step11: Setting orient="columns" (the default option for DataFrame) (for DataFrame only)
Step12: Setting orient="values" (for DataFrame only)
Step13: Setting more options
Step14: Read JSON files
See http
Step15: Using orient="records"
List like [{column -> value}, ... , {column -> value}]
Step16: Using orient="index"
Dict like {index -> {column -> value}}
Step17: Using orient="columns"
Dict like {column -> {index -> value}}
Step18: Using orient="values" (for DataFrame only)
Just the values array
Step19: Setting more options
Step20: Other file formats
Many other file formats can be used to import or export data with JSON.
See the following link for more information
Step21: Select rows
Step22: Select over index
Step23: Select rows and columns
Step24: Apply a function to selected colunms values
Step25: Apply a function to selected rows values
Step26: Merge
See
Step27: Merge with NaN
Step28: Merge with missing rows
Step29: GroupBy
See
Step30: GroupBy with single key
Step31: GroupBy with multiple keys
Step32: Count the number of occurrences of a column value
Step33: Count the number of NaN values in a column
Step34: Plot
See https
Step35: Line plot
Step36: or
Step37: Bar plot
Step38: Vertical
Step39: Horizontal
Step40: Histogram
Step41: Box plot
Step42: Hexbin plot
Step43: Kernel Density Estimation (KDE) plot
Step44: Area plot
Step45: Pie chart
Step46: Scatter plot | Python Code:
%matplotlib inline
#%matplotlib notebook
from IPython.display import display
import matplotlib
matplotlib.rcParams['figure.figsize'] = (9, 9)
import pandas as pd
import numpy as np
!head -n30 /Users/jdecock/Downloads/CA20170725_1744.CSV
#df = pd.read_csv("/Users/jdecock/Downloads/CA20170725_1744.CSV")
df = pd.read_csv("/Users/jdecock/Downloads/CA20170725_1744.CSV",
sep=';',
index_col=0,
usecols=range(4), # the last column is empty...
skiprows=9,
parse_dates=False,
infer_datetime_format=False,
keep_date_col=False,
date_parser=None,
dayfirst=False,
thousands=None,
decimal=',',
escapechar=None,
encoding='iso-8859-1')
df
df.columns
df['Débit Euros'].plot()
Explanation: Read CA CSV
Import directives
End of explanation
data_array = np.array([[1, 2, 3], [4, 5, 6]])
df = pd.DataFrame(data_array, index=[10, 20], columns=[100, 200, 300])
df
Explanation: Export/import data (write/read files)
See http://pandas.pydata.org/pandas-docs/stable/io.html
Reader functions are accessible from the top level pd object.
Writer functions are accessible from data objects (i.e. Series, DataFrame or Panel objects).
End of explanation
df.to_csv(path_or_buf="python_pandas_io_test.csv")
!cat python_pandas_io_test.csv
Explanation: CSV files
See http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
Write CSV files
See http://pandas.pydata.org/pandas-docs/stable/io.html#io-store-in-csv
Simplest version:
End of explanation
# FYI, many other options are available
df.to_csv(path_or_buf="python_pandas_io_test.csv",
sep=',',
columns=None,
header=True,
index=True,
index_label=None,
compression=None, # allowed values are 'gzip', 'bz2' or 'xz'
date_format=None)
!cat python_pandas_io_test.csv
Explanation: Setting more options:
End of explanation
df = pd.read_csv("python_pandas_io_test.csv")
df
Explanation: Read CSV files
See http://pandas.pydata.org/pandas-docs/stable/io.html#io-read-csv-table
Simplest version:
End of explanation
df = pd.read_csv("python_pandas_io_test.csv",
sep=',',
delimiter=None,
header='infer',
names=None,
index_col=0,
usecols=None,
squeeze=False,
prefix=None,
mangle_dupe_cols=True,
dtype=None,
engine=None,
converters=None,
true_values=None,
false_values=None,
skipinitialspace=False,
skiprows=None,
nrows=None,
na_values=None,
keep_default_na=True,
na_filter=True,
verbose=False,
skip_blank_lines=True,
parse_dates=False,
infer_datetime_format=False,
keep_date_col=False,
date_parser=None,
dayfirst=False,
iterator=False,
chunksize=None,
compression='infer',
thousands=None,
decimal=b'.',
lineterminator=None,
quotechar='"',
quoting=0,
escapechar=None,
comment=None,
encoding=None,
dialect=None,
tupleize_cols=False,
error_bad_lines=True,
warn_bad_lines=True,
skipfooter=0,
skip_footer=0,
doublequote=True,
delim_whitespace=False,
as_recarray=False,
compact_ints=False,
use_unsigned=False,
low_memory=True,
buffer_lines=None,
memory_map=False,
float_precision=None)
df
!rm python_pandas_io_test.csv
Explanation: Setting more options:
End of explanation
import io
Explanation: JSON files
See http://pandas.pydata.org/pandas-docs/stable/io.html#json
End of explanation
df.to_json(path_or_buf="python_pandas_io_test.json")
!cat python_pandas_io_test.json
Explanation: Write JSON files
See http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-writer
Simplest version
End of explanation
df.to_json(path_or_buf="python_pandas_io_test_split.json",
orient="split")
!cat python_pandas_io_test_split.json
Explanation: Setting orient="split"
End of explanation
df.to_json(path_or_buf="python_pandas_io_test_records.json",
orient="records")
!cat python_pandas_io_test_records.json
Explanation: Setting orient="records"
End of explanation
df.to_json(path_or_buf="python_pandas_io_test_index.json",
orient="index")
!cat python_pandas_io_test_index.json
Explanation: Setting orient="index" (the default option for Series)
End of explanation
df.to_json(path_or_buf="python_pandas_io_test_columns.json",
orient="columns")
!cat python_pandas_io_test_columns.json
Explanation: Setting orient="columns" (the default option for DataFrame) (for DataFrame only)
End of explanation
df.to_json(path_or_buf="python_pandas_io_test_values.json",
orient="values")
!cat python_pandas_io_test_values.json
Explanation: Setting orient="values" (for DataFrame only)
End of explanation
# FYI, many other options are available
df.to_json(path_or_buf="python_pandas_io_test.json",
orient='columns', # For DataFrame: 'split','records','index','columns' or 'values'
date_format=None, # None, 'epoch' or 'iso'
double_precision=10,
force_ascii=True,
date_unit='ms')
!cat python_pandas_io_test.json
Explanation: Setting more options
End of explanation
!cat python_pandas_io_test_split.json
df = pd.read_json("python_pandas_io_test_split.json",
orient="split")
df
Explanation: Read JSON files
See http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader
Using orient="split"
Dict like data {index -> [index], columns -> [columns], data -> [values]}
End of explanation
!cat python_pandas_io_test_records.json
df = pd.read_json("python_pandas_io_test_records.json",
orient="records")
df
Explanation: Using orient="records"
List like [{column -> value}, ... , {column -> value}]
End of explanation
!cat python_pandas_io_test_index.json
df = pd.read_json("python_pandas_io_test_index.json",
orient="index")
df
Explanation: Using orient="index"
Dict like {index -> {column -> value}}
End of explanation
!cat python_pandas_io_test_columns.json
df = pd.read_json("python_pandas_io_test_columns.json",
orient="columns")
df
Explanation: Using orient="columns"
Dict like {column -> {index -> value}}
End of explanation
!cat python_pandas_io_test_values.json
df = pd.read_json("python_pandas_io_test_values.json",
orient="values")
df
Explanation: Using orient="values" (for DataFrame only)
Just the values array
End of explanation
df = pd.read_json("python_pandas_io_test.json",
orient=None,
typ='frame',
dtype=True,
convert_axes=True,
convert_dates=True,
keep_default_dates=True,
numpy=False,
precise_float=False,
date_unit=None,
encoding=None,
lines=False)
df
!rm python_pandas_io_test*.json
Explanation: Setting more options
End of explanation
data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T
df = pd.DataFrame(data_array,
index=np.arange(1, 10, 1),
columns=['A', 'B', 'C'])
df
df.B
df["B"]
df.loc[:,"B"]
df.loc[:,['A','B']]
Explanation: Other file formats
Many other file formats can be used to import or export data with pandas.
See the following link for more information: http://pandas.pydata.org/pandas-docs/stable/io.html
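For instance, the same DataFrame can be round-tripped through pandas' pickle support (a small illustrative aside):
df.to_pickle("python_pandas_io_test.pkl")
df_from_pickle = pd.read_pickle("python_pandas_io_test.pkl")
df_from_pickle
!rm python_pandas_io_test.pkl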
Select columns
End of explanation
data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T
df = pd.DataFrame(data_array,
index=np.arange(1, 10, 1),
columns=['A', 'B', 'C'])
df
df.B < 50.
df[df.B < 50.]
Explanation: Select rows
End of explanation
df.iloc[:5]
Explanation: Select over index: select the first 5 rows
End of explanation
data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T
df = pd.DataFrame(data_array,
index=np.arange(1, 10, 1),
columns=['A', 'B', 'C'])
df
df[df.B < 50][df.A >= 2].loc[:,['A','B']]
Explanation: Select rows and columns
End of explanation
data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T
df = pd.DataFrame(data_array,
index=np.arange(1, 10, 1),
columns=['A', 'B', 'C'])
df
df.B *= 2.
df
df.B = pow(df.B, 2)
df
Explanation: Apply a function to selected column values
End of explanation
data_array = np.array([np.arange(1, 10, 1), np.arange(10, 100, 10), np.arange(100, 1000, 100)]).T
df = pd.DataFrame(data_array,
index=np.arange(1, 10, 1),
columns=['A', 'B', 'C'])
df
df[df.B < 50.] *= -1.
df
df[df.B < 50.] = pow(df[df.B < 50.], 2)
df
Explanation: Apply a function to selected row values
End of explanation
a1 = np.array([np.arange(1, 5, 1), np.arange(10, 50, 10), np.arange(100, 500, 100)]).T
df1 = pd.DataFrame(a1,
columns=['ID', 'B', 'C'])
a2 = np.array([np.arange(1, 5, 1), np.arange(1000, 5000, 1000), np.arange(10000, 50000, 10000)]).T
df2 = pd.DataFrame(a2,
columns=['ID', 'B', 'C'])
display(df1)
display(df2)
df = pd.merge(df1, df2, on="ID", suffixes=('_1', '_2')) #.dropna(how='any')
display(df)
Explanation: Merge
See: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html#pandas.merge
End of explanation
a1 = np.array([np.arange(1, 5, 1), np.arange(10, 50, 10), np.arange(100, 500, 100)]).T
df1 = pd.DataFrame(a1,
columns=['ID', 'B', 'C'])
a2 = np.array([np.arange(1, 5, 1), np.arange(1000, 5000, 1000), np.arange(10000, 50000, 10000)]).T
df2 = pd.DataFrame(a2,
columns=['ID', 'B', 'C'])
df1.iloc[0,2] = np.nan
df1.iloc[1,1] = np.nan
df1.iloc[2,2] = np.nan
df1.iloc[3,1] = np.nan
df2.iloc[0,1] = np.nan
df2.iloc[1,2] = np.nan
df2.iloc[2,1] = np.nan
df2.iloc[3,2] = np.nan
df = pd.merge(df1, df2, on="ID", suffixes=('_1', '_2')) #.dropna(how='any')
display(df1)
display(df2)
display(df)
Explanation: Merge with NaN
End of explanation
a1 = np.array([np.arange(1, 5, 1), np.arange(10, 50, 10), np.arange(100, 500, 100)]).T
df1 = pd.DataFrame(a1,
columns=['ID', 'B', 'C'])
a2 = np.array([np.arange(1, 3, 1), np.arange(1000, 3000, 1000), np.arange(10000, 30000, 10000)]).T
df2 = pd.DataFrame(a2,
columns=['ID', 'B', 'C'])
display(df1)
display(df2)
print("Left: use only keys from left frame (SQL: left outer join)")
df = pd.merge(df1, df2, on="ID", how="left", suffixes=('_1', '_2')) #.dropna(how='any')
display(df)
print("Right: use only keys from right frame (SQL: right outer join)")
df = pd.merge(df1, df2, on="ID", how="right", suffixes=('_1', '_2')) #.dropna(how='any')
display(df)
print("Inner: use intersection of keys from both frames (SQL: inner join) [DEFAULT]")
df = pd.merge(df1, df2, on="ID", how="inner", suffixes=('_1', '_2')) #.dropna(how='any')
display(df)
print("Outer: use union of keys from both frames (SQL: full outer join)")
df = pd.merge(df1, df2, on="ID", how="outer", suffixes=('_1', '_2')) #.dropna(how='any')
display(df)
Explanation: Merge with missing rows
End of explanation
a = np.array([[3, 5, 5, 5, 7, 7, 7, 7],
[2, 4, 4, 3, 1, 3, 3, 2],
[3, 4, 5, 6, 1, 8, 9, 8]]).T
df = pd.DataFrame(a,
columns=['A', 'B', 'C'])
df
Explanation: GroupBy
See: http://pandas.pydata.org/pandas-docs/stable/groupby.html
End of explanation
df.groupby(["A"]).count()
df.groupby(["A"]).sum().B
df.groupby(["A"]).mean().B
Explanation: GroupBy with single key
End of explanation
df.groupby(["A","B"]).count()
Explanation: GroupBy with multiple keys
End of explanation
df.A.value_counts()
df.A.value_counts().plot.bar()
Explanation: Count the number of occurrences of a column value
End of explanation
a = np.array([[3, np.nan, 5, np.nan, 7, 7, 7, 7],
[2, 4, 4, 3, 1, 3, 3, 2],
[3, 4, 5, 6, 1, 8, 9, 8]]).T
df = pd.DataFrame(a,
columns=['A', 'B', 'C'])
df
df.A.isnull().sum()
Explanation: Count the number of NaN values in a column
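The same idea extends to the whole DataFrame (a quick aside using the df defined above):
df.isnull().sum()        # per-column NaN counts
df.isnull().sum().sum()  # total NaN count for the whole DataFrame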
End of explanation
#help(df.plot)
Explanation: Plot
See https://pandas.pydata.org/pandas-docs/stable/visualization.html
End of explanation
x = np.arange(0, 6, 0.1)
y1 = np.cos(x)
y2 = np.sin(x)
Y = np.array([y1, y2]).T
df = pd.DataFrame(Y,
columns=['cos(x)', 'sin(x)'],
index=x)
df.iloc[:10]
df.plot(legend=True)
Explanation: Line plot
End of explanation
df.plot.line(legend=True)
Explanation: or
End of explanation
x = np.arange(0, 6, 0.5)
y1 = np.cos(x)
y2 = np.sin(x)
Y = np.array([y1, y2]).T
df = pd.DataFrame(Y,
columns=['cos(x)', 'sin(x)'],
index=x)
df
Explanation: Bar plot
End of explanation
df.plot.bar(legend=True)
df.plot.bar(legend=True, stacked=True)
Explanation: Vertical
End of explanation
df.plot.barh(legend=True)
Explanation: Horizontal
End of explanation
x1 = np.random.normal(size=(10000))
x2 = np.random.normal(loc=3, scale=2, size=(10000))
X = np.array([x1, x2]).T
df = pd.DataFrame(X, columns=[r'$\mathcal{N}(0,1)$', r'$\mathcal{N}(3,2)$'])
df.plot.hist(alpha=0.2, bins=100, legend=True)
Explanation: Histogram
End of explanation
x1 = np.random.normal(size=(10000))
x2 = np.random.normal(loc=3, scale=2, size=(10000))
X = np.array([x1, x2]).T
df = pd.DataFrame(X, columns=[r'$\mathcal{N}(0,1)$', r'$\mathcal{N}(3,2)$'])
df.plot.box()
Explanation: Box plot
End of explanation
df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b'])
df['b'] = df['b'] + np.arange(1000)
df.plot.hexbin(x='a', y='b', gridsize=25)
Explanation: Hexbin plot
End of explanation
x1 = np.random.normal(size=(10000))
x2 = np.random.normal(loc=3, scale=2, size=(10000))
X = np.array([x1, x2]).T
df = pd.DataFrame(X, columns=[r'$\mathcal{N}(0,1)$', r'$\mathcal{N}(3,2)$'])
df.plot.kde()
Explanation: Kernel Density Estimation (KDE) plot
End of explanation
df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df.plot.area()
Explanation: Area plot
End of explanation
x = np.random.randint(low=0, high=6, size=(50))
df = pd.DataFrame(x, columns=["A"])
df.A.value_counts()
df.A.value_counts().plot.pie(y="A")
Explanation: Pie chart
End of explanation
x1 = np.random.normal(size=(10000))
x2 = np.random.normal(loc=3, scale=2, size=(10000))
X = np.array([x1, x2]).T
df = pd.DataFrame(X, columns=[r'$\mathcal{N}(0,1)$', r'$\mathcal{N}(3,2)$'])
df.plot.scatter(x=r'$\mathcal{N}(0,1)$',
y=r'$\mathcal{N}(3,2)$',
alpha=0.2)
Explanation: Scatter plot
End of explanation |
8,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
|S.No| Package | Comments |
|---|---|---|
|1| pandas | provides data structures (such as DataFrame) to <span style="color
Step1: Read input tables
Step2: Block tables to get candidate set
Step3: Debug blocking output
Step4: Match tuple pairs in candidate set
1. Sample candidate set --> S
2. Label S
3. Split S into development set (I) and evaluation set (J)
4. Select best learning-based matcher X, using I
5. Compute accuracy of X on J
Step5: Selecting the best learning-based matcher using I
1. Create a set of ML-matchers
2. Generate a set of features (F)
3. Convert I into a set of feature vectors (H) using F
4. Select best learning-based matcher (X) using k-fold cross validation over H
5. Debug X (and repeat the above steps)
Step7: Debug X (Random Forest)
Step8: Compute accuracy of X (Decision Tree) on J
1. Train X using H
2. Convert J into a set of feature vectors (L)
3. Predict on L using X
4. Evaluate the predictions | Python Code:
import py_entitymatching as em
import profiler
import pandas as pd
Explanation: |S.No| Package | Comments |
|---|---|---|
|1| pandas | provides data structures (such as DataFrame) to <span style="color:red;">store and manage relational data</span>. Specifically, DataFrame is used to represent input tables. |
|2| scikit-learn | provides implementations for common <span style="color:red;">machine learning</span> algorithms. Specifically, this is used in ML-based matchers. |
|3| joblib | provides <span style="color:red;">multiprocessing</span> capabilities. Specifically, this is used to parallelize blockers across multiple processors.|
|4| PyQt4 | provides tools to <span style="color:red;">build GUI</span>. Specifically, this is used to build GUI for labeling data and debugging matchers.|
|5| py_stringsimjoin | provides <span style="color:red;">scalable</span> implementations for <span style="color:red;">string similarity joins</span> over two tables. Specifically, this is used to scale blockers. |
|6| py_stringmatching | provides a comprehensive set of <span style="color:red;"> tokenizers and string similarity functions</span>. Specifically, this is to create features for blocking and matching.|
|7| cloudpickle |provides functions to <span style="color:red;"> serialize Python constructs</span>. Specifically, this is used to load/save objects from/to disk. |
|8| pyprind | library to display <span style="color:red;"> progress indicators</span>. Specifically, this is used to display progress of blocking functions, matching functions, etc. |
|9| pyparsing | library to <span style="color:red;">parse strings</span>. Specifically, this is used to parse rules/features that are declaratively written by the user. |
|10| six | provides functions to <span style="color:red;">write compatible code across Python 2 and 3</span>. |
End of explanation
## Read input tables
A = em.read_csv_metadata('dblp_demo.csv', key='id')
B = em.read_csv_metadata('acm_demo.csv', key='id')
len(A), len(B), len(A) * len(B)
A.head(2)
B.head(2)
# If the tables are large we can downsample the tables like this
A1, B1 = em.down_sample(A, B, 500, 1, show_progress=False)
len(A1), len(B1)
# But for the demo, we will use the entire table A and B
Explanation: Read input tables
End of explanation
profiler.profile_table(A, 'paper year')
profiler.profile_table(B, 'paper year')
B.replace({'paper year':{
20003:2003
}}, inplace=True)
### Blocking plan
### A, B -- AttrEquivalence blocker [year]--------------------------| Candidate set
# Create attribute equivalence blocker
ab = em.AttrEquivalenceBlocker()
# Block tables on the 'paper year' attribute: only pairs with the same year go into the candidate set
C1 = ab.block_tables(A, B, 'paper year', 'paper year',
l_output_attrs=['title', 'authors', 'paper year'],
r_output_attrs=['title', 'authors', 'paper year']
)
len(C1)
C1.head(2)
Explanation: Block tables to get candidate set
End of explanation
# check whether the current blocking method has dropped a lot of potential matches
dbg = em.debug_blocker(C1, A, B)
dbg.head()
# em.view_table(dbg)
# Revised blocking plan
# A, B -- AttrEquivalence blocker [year] --------------------|
# |---> candidate set
# A, B -- Overlap blocker [title]---------------------------|
profiler.profile_table(A, 'title', plot=False)
profiler.profile_table(B, 'title', plot=False)
# Initialize overlap blocker
ob = em.OverlapBlocker()
# Block over title attribute
C2 = ob.block_tables(A, B, 'title', 'title', show_progress=False, overlap_size=1)
len(C2)
# Combine the outputs from attr. equivalence blocker and overlap blocker
C = em.combine_blocker_outputs_via_union([C1, C2])
len(C)
# Check again to see if we are dropping any potential matches
dbg = em.debug_blocker(C, A, B)
dbg.head()
Explanation: Debug blocking output
End of explanation
# Sample candidate set
S = em.sample_table(C, 450)
# Label S
S = em.label_table(S, 'label')
# Load the pre-labeled data
S = em.read_csv_metadata('labeled_data_demo.csv',
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
len(S)
# Split S into I and J
IJ = em.split_train_test(S, train_proportion=0.5, random_state=0)
I = IJ['train']
J = IJ['test']
Explanation: Match tuple pairs in candidate set
1. Sample candidate set --> S
2. Label S
3. Split S into development set (I) and evaluation set (J)
4. Select best learning-based matcher X, using I
5. Compute accuracy of X on J
End of explanation
# Create a set of ML-matchers
dt = em.DTMatcher(name='DecisionTree', random_state=0)
svm = em.SVMMatcher(name='SVM', random_state=0)
rf = em.RFMatcher(name='RF', random_state=0)
lg = em.LogRegMatcher(name='LogReg', random_state=0)
ln = em.LinRegMatcher(name='LinReg')
# Generate a set of features
F = em.get_features_for_matching(A, B)
# List the feature names generated
F['feature_name']
# Convert the I into a set of feature vectors using F
H = em.extract_feature_vecs(I,
feature_table=F,
attrs_after='label',
show_progress=False)
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric='f1', random_state=0)
result['cv_stats']
Explanation: Selecting the best learning-based matcher using I
1. Create a set of ML-matchers
2. Generate a set of features (F)
3. Convert I into a set of feature vectors (H) using F
4. Select best learning-based matcher (X) using k-fold cross validation over H
5. Debug X (and repeat the above steps)
End of explanation
# Split H into P and Q
PQ = em.split_train_test(H, train_proportion=0.5, random_state=0)
P = PQ['train']
Q = PQ['test']
# Debug RF matcher using GUI
em.vis_debug_rf(rf, P, Q,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label')
# Add a feature to do Jaccard on title + authors and add it to F
# Create a feature declaratively
sim = em.get_sim_funs_for_matching()
tok = em.get_tokenizers_for_matching()
feature_string = """jaccard(wspace((ltuple['title'] + ' ' + ltuple['authors']).lower()),
                    wspace((rtuple['title'] + ' ' + rtuple['authors']).lower()))"""
feature = em.get_feature_fn(feature_string, sim, tok)
# Add feature to F
em.add_feature(F, 'jac_ws_title_authors', feature)
# Print supported sim. functions
pd.DataFrame({'simfunctions':sorted(sim.keys())})
# Print supported tokenizers
pd.DataFrame({'tokenizers':sorted(tok.keys())})
F['feature_name']
# Convert I into feature vectors using updated F
H = em.extract_feature_vecs(I,
feature_table=F,
attrs_after='label',
show_progress=False)
# Check whether the updated F improves X (Random Forest)
result = em.select_matcher([rf], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric='f1', random_state=0)
result['cv_stats']
# Select the best matcher again using CV
result = em.select_matcher([dt, rf, svm, ln, lg], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric='f1', random_state=0)
result['cv_stats']
Explanation: Debug X (Random Forest)
End of explanation
# Train using feature vectors from I
dt.fit(table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label')
# Convert J into a set of feature vectors using F
L = em.extract_feature_vecs(J, feature_table=F,
attrs_after='label', show_progress=False)
# Predict on L
predictions = dt.predict(table=L, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
append=True, target_attr='predicted', inplace=False)
# Evaluate the predictions
eval_result = em.eval_matches(predictions, 'label', 'predicted')
em.print_eval_summary(eval_result)
Explanation: Compute accuracy of X (Decision Tree) on J
1. Train X using H
2. Convert J into a set of feature vectors (L)
3. Predict on L using X
4. Evaluate the predictions
End of explanation |
8,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Equation of motion - SDE to be solved
$\ddot{q}(t) + \Gamma_0\dot{q}(t) + \Omega_0^2 q(t) - \dfrac{1}{m} F(t) = 0 $
where q = x, y or z
Where $F(t) = \mathcal{F}{fluct}(t) + F{feedback}(t)$
Taken from page 46 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler
Using $\mathcal{F}_{fluct}(t) = \sqrt{2m \Gamma_0 k_B T_0}\dfrac{dW(t)}{dt}$
and $F_{feedback}(t) = \Omega_0 \eta q^2 \dot{q}$
Taken from page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler
we get the following SDE
Step1: Using values obtained from fitting to data from a real particle we set the following constant values describing the system. Cooling has been assumed to be off by setting $\eta = 0$.
Step2: partition the interval [0, T] into N equal subintervals of width $\Delta t>0$
Step3: set $Y_{0}=x_{0}$
Step4: Generate independent and identically distributed normal random variables with expected value 0 and variance dt
Step5: Apply Milstein's method (Euler Maruyama if $b'(Y_{n}) = 0$ as is the case here)
Step6: We now have an array of positions, $v$, and velocities $p$ with time $t$.
Step7: Alternatively we can use a derivative-free version of Milsteins method as a two-stage kind-of Runge-Kutta method, documented in wikipedia (https
Step8: The form of $F_{feedback}(t)$ is still questionable
On page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler he uses the form
Step9: values below are taken from a ~1e-2 mbar cooled save | Python Code:
def a_q(t, v, q):
return v
def a_v(t, v, q):
return -(Gamma0 - Omega0*eta*q**2)*v - Omega0**2*q
def b_v(t, v, q):
return np.sqrt(2*Gamma0*k_b*T_0/m)
Explanation: Equation of motion - SDE to be solved
$\ddot{q}(t) + \Gamma_0\dot{q}(t) + \Omega_0^2 q(t) - \dfrac{1}{m} F(t) = 0 $
where q = x, y or z
Where $F(t) = \mathcal{F}_{fluct}(t) + F_{feedback}(t)$
Taken from page 46 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler
Using $\mathcal{F}_{fluct}(t) = \sqrt{2m \Gamma_0 k_B T_0}\dfrac{dW(t)}{dt}$
and $F_{feedback}(t) = \Omega_0 \eta q^2 \dot{q}$
Taken from page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler
we get the following SDE:
$\dfrac{d^2q(t)}{dt^2} + (\Gamma_0 - \Omega_0 \eta q(t)^2)\dfrac{dq(t)}{dt} + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$
split into two first-order ODEs/SDEs
letting $v = \dfrac{dq}{dt}$
$\dfrac{dv(t)}{dt} + (\Gamma_0 - \Omega_0 \eta q(t)^2)v + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$
therefore
$\dfrac{dv(t)}{dt} = -(\Gamma_0 - \Omega_0 \eta q(t)^2)v - \Omega_0^2 q(t) + \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} $
$v = \dfrac{dq}{dt}$ therefore $dq = v~dt$
\begin{align}
dq&=v\,dt\
dv&=[-(\Gamma_0-\Omega_0 \eta q(t)^2)v(t) - \Omega_0^2 q(t)]\,dt + \sqrt{\frac{2\Gamma_0 k_B T_0}m}\,dW
\end{align}
Apply Milstein Method to solve
Consider the autonomous Itō stochastic differential equation
${\mathrm {d}}X_{t}=a(X_{t})\,{\mathrm {d}}t+b(X_{t})\,{\mathrm {d}}W_{t}$
Taking $X_t = q_t$ for the 1st equation above (i.e. $dq = v~dt$) we get:
$$ a(q_t) = v $$
$$ b(q_t) = 0 $$
Taking $X_t = v_t$ for the 2nd equation above (i.e. $dv = ...$) we get:
$$a(v_t) = -(\Gamma_0-\Omega_0\eta q(t)^2)v - \Omega_0^2 q(t)$$
$$b(v_t) = \sqrt{\dfrac{2\Gamma_0 k_B T_0}m}$$
${\displaystyle b'(v_{t})=0}$ therefore the diffusion term does not depend on ${\displaystyle v_{t}}$ , the Milstein's method in this case is therefore equivalent to the Euler–Maruyama method.
We then construct these functions in python:
End of explanation
Gamma0 = 4000 # radians/second
Omega0 = 75e3*2*np.pi # radians/second
eta = 0.5e7
T_0 = 300 # K
k_b = scipy.constants.Boltzmann # J/K
m = 3.1e-19 # KG
Explanation: Using values obtained from fitting to data from a real particle we set the following constant values describing the system. Cooling has been assumed to be off by setting $\eta = 0$.
End of explanation
dt = 1e-10
tArray = np.arange(0, 100e-6, dt)
print("{} Hz".format(1/dt))
Explanation: partition the interval [0, T] into N equal subintervals of width $\Delta t>0$:
$ 0=\tau_{0}<\tau_{1}<\dots <\tau_{N}=T{\text{ with }}\tau_{n}:=n\Delta t{\text{ and }}\Delta t={\frac {T}{N}}$
End of explanation
q0 = 0
v0 = 0
q = np.zeros_like(tArray)
v = np.zeros_like(tArray)
q[0] = q0
v[0] = v0
Explanation: set $Y_{0}=x_{0}$
End of explanation
np.random.seed(88)
dwArray = np.random.normal(0, np.sqrt(dt), len(tArray)) # independent and identically distributed normal random variables with expected value 0 and variance dt
Explanation: Generate independent and identically distributed normal random variables with expected value 0 and variance dt
End of explanation
#%%timeit
for n, t in enumerate(tArray[:-1]):
dw = dwArray[n]
v[n+1] = v[n] + a_v(t, v[n], q[n])*dt + b_v(t, v[n], q[n])*dw + 0
q[n+1] = q[n] + a_q(t, v[n], q[n])*dt + 0
Explanation: Apply Milstein's method (Euler Maruyama if $b'(Y_{n}) = 0$ as is the case here):
recursively define $Y_{n}$ for $ 1\leq n\leq N $ by
$ Y_{{n+1}}=Y_{n}+a(Y_{n})\Delta t+b(Y_{n})\Delta W_{n}+{\frac {1}{2}}b(Y_{n})b'(Y_{n})\left((\Delta W_{n})^{2}-\Delta t\right)$
Perform this for the 2 first order differential equations:
End of explanation
plt.plot(tArray*1e6, v)
plt.xlabel("t (us)")
plt.ylabel("v")
plt.plot(tArray*1e6, q)
plt.xlabel("t (us)")
plt.ylabel("q")
Explanation: We now have an array of positions, $q$, and velocities, $v$, with time $t$.
End of explanation
q0 = 0
v0 = 0
X = np.zeros([len(tArray), 2])
X[0, 0] = q0
X[0, 1] = v0
def a(t, X):
q, v = X
return np.array([v, -(Gamma0 - Omega0*eta*q**2)*v - Omega0**2*q])
def b(t, X):
q, v = X
return np.array([0, np.sqrt(2*Gamma0*k_b*T_0/m)])
%%timeit
S = np.array([-1,1])
for n, t in enumerate(tArray[:-1]):
dw = dwArray[n]
K1 = a(t, X[n])*dt + b(t, X[n])*(dw - S*np.sqrt(dt))
Xh = X[n] + K1
K2 = a(t, Xh)*dt + b(t, Xh)*(dw + S*np.sqrt(dt))
X[n+1] = X[n] + 0.5 * (K1+K2)
q = X[:, 0]
v = X[:, 1]
plt.plot(tArray*1e6, v)
plt.xlabel("t (us)")
plt.ylabel("v")
plt.plot(tArray*1e6, q)
plt.xlabel("t (us)")
plt.ylabel("q")
Explanation: Alternatively we can use a derivative-free version of Milsteins method as a two-stage kind-of Runge-Kutta method, documented in wikipedia (https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_method_%28SDE%29) or the original in arxiv.org https://arxiv.org/pdf/1210.0933.pdf.
End of explanation
def a_q(t, v, q):
return v
def a_v(t, v, q):
return -(Gamma0 + deltaGamma)*v - Omega0**2*q
def b_v(t, v, q):
return np.sqrt(2*Gamma0*k_b*T_0/m)
Explanation: The form of $F_{feedback}(t)$ is still questionable
On page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler he uses the form: $F_{feedback}(t) = \Omega_0 \eta q^2 \dot{q}$
On page 2 of 'Parametric feedback cooling of levitated optomechanics in a parabolic mirror trap' - Paper by Jamie and Muddassar - they use the form: $F_{feedback}(t) = \dfrac{\Omega_0 \eta q^2 \dot{q}}{q_0^2}$ where $q_0$ is the amplitude of the motion: $q(t) = q_0\sin(\omega_0 t)$
However it always shows up as a term $\delta \Gamma$ like so:
$\dfrac{d^2q(t)}{dt^2} + (\Gamma_0 + \delta \Gamma)\dfrac{dq(t)}{dt} + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$
By fitting to data we extract the following 3 parameters:
1) $A = \gamma^2 \dfrac{k_B T_0}{\pi m}\Gamma_0 $
Where:
$\gamma$ is the conversion factor between Volts and nanometres. This parameterises the amount of light/number of photons collected from the nanoparticle. With unchanged alignment and the same particle this should remain constant with changes in pressure.
$m$ is the mass of the particle, a constant
$T_0$ is the temperature of the environment
$\Gamma_0$ the damping due to the environment only
2) $\Omega_0$ - the natural frequency at this trapping power
3) $\Gamma$ - the total damping on the system including environment and feedback etc...
By taking a reference save with no cooling we have $\Gamma = \Gamma_0$ and therefore we can extract $A' = \gamma^2 \dfrac{k_B T_0}{\pi m}$. Since $A'$ should be constant with pressure we can therefore extract $\Gamma_0$ at any pressure (if we have a reference save and therefore a value of $A'$) and therefore can extract $\delta \Gamma$, the damping due to cooling, we can then plug this into our SDE instead in order to include cooling in the SDE model.
For any dataset at any pressure we can do:
$\Gamma_0 = \dfrac{A}{A'}$
And then $\delta \Gamma = \Gamma - \Gamma_0$
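As a small numerical sketch of that bookkeeping (every value below is made up for illustration, not measured):
# Illustrative numbers only, chosen to reproduce the Gamma0 and deltaGamma used below
A_ref = 1.2e5        # fitted A from the uncooled reference save
Gamma_ref = 4000.0   # total damping of the reference save (= Gamma_0 there)
A_prime = A_ref / Gamma_ref              # A' = gamma^2 k_B T_0 / (pi m), constant with pressure
A_now = 450.0        # fitted A at the pressure of interest
Gamma_now = 2215.0   # total fitted damping at that pressure
Gamma0_now = A_now / A_prime             # damping due to the environment only
deltaGamma_now = Gamma_now - Gamma0_now  # extra damping supplied by the feedback cooling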
Using this form and the same derivation as above we arrive at the following form of the 2 1st order differential equations:
\begin{align}
dq&=v\,dt\
dv&=[-(\Gamma_0 + \delta \Gamma)v(t) - \Omega_0^2 q(t)]\,dt + \sqrt{\frac{2\Gamma_0 k_B T_0}m}\,dW
\end{align}
End of explanation
Gamma0 = 15 # radians/second
deltaGamma = 2200
Omega0 = 75e3*2*np.pi # radians/second
eta = 0.5e7
T_0 = 300 # K
k_b = scipy.constants.Boltzmann # J/K
m = 3.1e-19 # KG
dt = 1e-10
tArray = np.arange(0, 100e-6, dt)
q0 = 0
v0 = 0
q = np.zeros_like(tArray)
v = np.zeros_like(tArray)
q[0] = q0
v[0] = v0
np.random.seed(88)
dwArray = np.random.normal(0, np.sqrt(dt), len(tArray)) # independent and identically distributed normal random variables with expected value 0 and variance dt
for n, t in enumerate(tArray[:-1]):
dw = dwArray[n]
v[n+1] = v[n] + a_v(t, v[n], q[n])*dt + b_v(t, v[n], q[n])*dw + 0
q[n+1] = q[n] + a_q(t, v[n], q[n])*dt + 0
plt.plot(tArray*1e6, v)
plt.xlabel("t (us)")
plt.ylabel("v")
plt.plot(tArray*1e6, q)
plt.xlabel("t (us)")
plt.ylabel("q")
Explanation: values below are taken from a ~1e-2 mbar cooled save
End of explanation |
8,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use logical constraints with decision optimization
This tutorial includes everything you need to set up decision optimization engines, build a mathematical programming model, leveraging logical constraints.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>
Step1: A restart of the kernel might be needed.
Step 2
Step2: The truth value of a constraint not added to a model is free
A constraint that is not added to a model, has no effect. Its truth value is free
Step3: Using constraint truth values in modeling
We have learned about the truth value variable of linear constraints, but there's more.
Linear constraints can be freely used in expressions
Step4: Constraint truth values can be used with arithmetic operators, just as variables can. In the next model, we express a (slightly) more complex constraint
Step5: As we have seen, constraints can be used in expressions. This includes the Model.sum() and Model.dot() aggregation methods.
In the next model, we define ten variables, one of which must be equal to 3 (we dpn't care which one, for now). As we maximize the sum of all xs variables, all will end up equal to their upper bound, except for one.
Step6: As we can see, all variables but one are set to their upper bound of 100. We cannot predict which variable will be set to 3.
However, let's imagine that we prefer variable with a lower index to be set to 3, how can we express this preference?
The answer is to use an additional expression to the objective, using a scalar product of constraint truth value
Step7: As expected, the x variable set to 3 now is the first one.
Using truth values to negate a constraint
Truth values can be used to negate a complex constraint, by forcing its truth value to be equal to 0.
In the next model, we illustrate how an equality constraint can be negated by forcing its truth value to zero. This negation forbids y to be equal to 4, as it would be without this negation.
Finally, the objective is 7 instead of 8.
Step8: Summary
We have seen that linear constraints have an associated binary variable, its truth value, whose value is linked to whether or not the constraint is satisfied.
second, linear constraints can be freely mixed with variables in expression to express meta-constraints that is, constraints
about constraints. As an example, we have shown how to use truth values to negate constraints.
Note
Step9: Step 3
Step10: Step 4
Step11: Step 5
Step12: In this second variant, the objective coefficient for (y+z) is 2 instead of 100, so x domines the objective, and reache sits upper bound, while (y+z) must be less than 9, which is what we observe. | Python Code:
import sys
try:
import docplex.mp
except:
raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
Explanation: Use logical constraints with decision optimization
This tutorial includes everything you need to set up decision optimization engines, build a mathematical programming model, leveraging logical constraints.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition
Table of contents:
Describe the business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Import the library
Step 2: Learn about constraint truth values
Step 3: Learn about equivalence constraints
Summary
Logical constraints let you use the truth value of constraints inside the model. The truth value of a constraint
is a binary variable equal to 1 when the constraint is satisfied, and equal to 0 when not. Adding a constraint to a model ensures that it is always satisfied.
With logical constraints, one can use the truth value of a constraint inside the model, allowing to choose dynamically whether a constraint is to be satisfied (or not).
How decision optimization can help
Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
<u>With prescriptive analytics, you can:</u>
Automate the complex decisions and trade-offs to better manage your limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
Use decision optimization
Step 1: Import the library
Run the following code to import Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
End of explanation
from docplex.mp.model import Model
m1 = Model()
x = m1.integer_var(name='ix')
y = m1.integer_var(name='iy')
ct = m1.add(x + y <= 3)
# access the truth value of a linear constraint
ct_truth = ct.status_var
m1.maximize(x+y)
assert m1.solve()
print('the truth value of [{0!s}] is {1}'.format(ct, ct_truth.solution_value))
Explanation: A restart of the kernel might be needed.
Step 2: Learn about constraint truth values
Any discrete linear constraint can be associated to a binary variable that holds the truth value of the constraint.
But first, let's explain what a discrete constraint is
Discrete linear constraint
A discrete linear constraint is built from discrete coefficients and discrete variables, that is variables with type integer or binary.
For example, assuming x and y are integer variables:
2x+3y == 1 is discrete
x+y = 3.14 is not (because of 3.14)
1.1 x + 2.2 y <= 3 is not because of the non-integer coefficients 1.1 and 2.2
The truth value of an added constraint is always 1
The truth value of a linear constraint is accessed by the status_var property. This property returns a binary which can be used anywhere a variable can. However, the value of the truth value variable and the constraint are linked, both ways:
a constraint is satisfied if and only if its truth value variable equals 1
a constraint is not satisfied if and only if its truth value variable equals 0.
In the following small model, we show that the truth value of a constraint which has been added to a model is always equal to 1.
End of explanation
m2 = Model(name='logical2')
x = m2.integer_var(name='ix', ub=4)
y = m2.integer_var(name='iy', ub=4)
ct = (x + y <= 3)
ct_truth = ct.status_var # not m2.add() here!
m2.maximize(x+y)
assert m2.solve()
m2.print_solution()
print('the truth value of [{0!s}] is {1}'.format(ct, ct_truth.solution_value))
Explanation: The truth value of a constraint not added to a model is free
A constraint that is not added to a model, has no effect. Its truth value is free: it can be either 1 or 0.
In the following example, both x and y are set to their upper bound, so that the constraint is not satisfied; hence the truth value is 0.
End of explanation
m3 = Model(name='logical3')
x = m3.integer_var(name='ix', ub=4)
y = m3.integer_var(name='iy', ub=4)
ct_x2 = (x == 2)
ct_y4 = (y == 4)
# use constraints in comparison
m3.add( ct_y4 <= ct_x2 )
m3.maximize(y)
assert m3.solve()
# expected solution x==2, and y==4.
m3.print_solution()
Explanation: Using constraint truth values in modeling
We have learned about the truth value variable of linear constraints, but there's more.
Linear constraints can be freely used in expressions: Docplex will then substitute the constraint's truth value
variable in the expression.
Let's experiment again with a toy model: in this model,
we want to express that when x == 2 is false, then y == 4 must also be false.
To express this, it suffices to say that the truth value of y == 4 is less than or equal
to the truth value of x == 2. When x == 2 is false, its truth value is 0, hence the truth value of y == 4 is also zero, and y cannot be equal to 4.
However, as shown in the model below, it is not necessary to use the status_var property: using
the constraints in a comparison expression works fine.
As we maximize y, y has value 4 in the optimal solution (it is the upper bound), and consequently the constraint ct_y4 is satisfied. From the inequality between truth values,
it follows that the truth value of ct_x2 equals 1 and x is equal to 2.
Using the constraints in the inequality has silently converted each constraint into its truth value.
End of explanation
m31 = Model(name='logical31')
x = m31.integer_var(name='ix', ub=4)
y = m31.integer_var(name='iy', ub=10)
z = m31.integer_var(name='iz', ub=10)
ct_x2 = (x == 3)
ct_y5 = (y == 5)
ct_z5 = (z == 5)
#either ct_x2 is true or -both- ct_y5 and ct_z5 must be true
m31.add( 2 * ct_x2 + (ct_y5 + ct_z5) == 2)
# force x to be less than 2: it cannot be equal to 3!
m31.add(x <= 2)
# maximize sum of x,y,z
m31.maximize(x+y+z)
assert m31.solve()
# the expected solution is: x=2, y=5, z=5
assert m31.objective_value == 12
m31.print_solution()
Explanation: Constraint truth values can be used with arithmetic operators, just as variables can. In the next model, we express a (slightly) more complex constraint:
either x is equal to 3, or both y and z are equal to 5
Let's see how we can express this easily with truth values:
End of explanation
m4 = Model(name='logical4')
xs = m4.integer_var_list(10, ub=100)
cts = [xi==3 for xi in xs]
m4.add( m4.sum(cts) == 1)
m4.maximize(m4.sum(xs))
assert m4.solve()
m4.print_solution()
Explanation: As we have seen, constraints can be used in expressions. This includes the Model.sum() and Model.dot() aggregation methods.
In the next model, we define ten variables, one of which must be equal to 3 (we don't care which one, for now). As we maximize the sum of all xs variables, all will end up equal to their upper bound, except for one.
End of explanation
preference = m4.dot(cts, (k+1 for k in range(len(xs))))
# we prefer lower indices for satisfying the x==3 constraint
# so the final objective is a maximize of sum of xs -minus- the preference
m4.maximize(m4.sum(xs) - preference)
assert m4.solve()
m4.print_solution()
Explanation: As we can see, all variables but one are set to their upper bound of 100. We cannot predict which variable will be set to 3.
However, let's imagine that we prefer variable with a lower index to be set to 3, how can we express this preference?
The answer is to use an additional expression to the objective, using a scalar product of constraint truth value
End of explanation
m5 = Model(name='logical5')
x = m5.integer_var(name='ix', ub=4)
y = m5.integer_var(name='iy', ub=4)
# this is the equality constraint we want to negate
ct_xy7 = (y + x >= 7)
# forcing truth value to zero means the constraint is not satisfied.
# note how we use a constraint in an expression
negation = m5.add( ct_xy7 == 0)
# maximize x+y should yield both variables to 4, but x+y cannot be greater than 7
m5.maximize(x + y)
assert m5.solve()
m5.print_solution()
# expecting 6 as objective, not 8
assert m5.objective_value == 6
# now remove the negation
m5.remove_constraint(negation)
# and solve again
assert m5.solve()
# the objective is 8 as expected: both x and y are equal to 4
assert m5.objective_value == 8
m5.print_solution()
Explanation: As expected, the x variable set to 3 now is the first one.
Using truth values to negate a constraint
Truth values can be used to negate a complex constraint, by forcing its truth value to be equal to 0.
In the next model, we illustrate how a constraint can be negated by forcing its truth value to zero. This negation forbids x + y from reaching 7, as it otherwise would when maximizing.
Finally, the objective is 6 instead of 8.
End of explanation
m6 = Model(name='logical6')
x = m6.integer_var(name='ix', ub=4)
y = m6.integer_var(name='iy', ub=4)
# this is the equality constraint we want to negate
m6.add(x +1 <= y)
m6.add(x != 3)
m6.add(y != 4)
# forcing truth value to zero means the constraint is not satisfied.
# note how we use a constraint in an expression
m6.add(x+y <= 7)
# maximize x+y should yield both variables to 4,
# but here: x < y, y cannot be 4 thus x cannot be 3 either so we get x=2, y=3
m6.maximize(x + y)
assert m6.solve()
m6.print_solution()
# expecting 5 as objective, not 8
assert m6.objective_value == 5
Explanation: Summary
We have seen that linear constraints have an associated binary variable, its truth value, whose value is linked to whether or not the constraint is satisfied.
Second, linear constraints can be freely mixed with variables in expressions to express meta-constraints, that is, constraints
about constraints. As an example, we have shown how to use truth values to negate constraints.
Note: the != (not_equals) operator
Since version 2.9, Docplex provides a 'not_equal' operator, between discrete expressions. Of course, this is implemented using truth values, but the operator provides a convenient way to express this constraint.
End of explanation
m7 = Model(name='logical7')
size = 7
il = m7.integer_var_list(size, name='i', ub=10)
jl = m7.integer_var_list(size, name='j', ub=10)
bl = m7.binary_var_list(size, name='b')
for k in range(size):
# for each i, relate bl_k to il_k==5 *and* jl_k == 7
m7.add_equivalence(bl[k], il[k] == 5)
m7.add_equivalence(bl[k], jl[k] == 7)
# now maximize sum of bs
m7.maximize(m7.sum(bl))
assert m7.solve()
m7.print_solution()
Explanation: Step 3: Learn about equivalence constraints
As we have seen, using a constraint in expressions automatically generates a truth value variable, whose value is linked to the status of the constraint.
However, in some cases, it can be useful to relate the status of a constraint to an existing binary variable. This is the purpose of equivalence constraints.
An equivalence constraint relates an existing binary variable to the status of a discrete linear constraint, in both directions. The syntax is:
`Model.add_equivalence(bvar, linear_ct, active_value, name)`
bvar is the existing binary variable
linear-ct is a discrete linear constraint
active_value can take values 1 or 0 (the default is 1)
name is an optional string to name the equivalence.
If the binary variable bvar equals 1, then the constraint is satisfied. Conversely, if the constraint is satisfied, the binary variable is set to 1.
End of explanation
m8 = Model(name='logical8')
x = m8.continuous_var(name='x', ub=100)
b = m8.binary_var(name='b')
m8.maximize(100*b +x)
assert m8.solve()
assert m8.objective_value == 200
m8.print_solution()
ind_pi = m8.add_indicator(b, x <= 3.14)
assert m8.solve()
assert m8.objective_value <= 104
m8.print_solution()
Explanation: Step 4: Learn about indicator constraints
The equivalence constraint described in the previous section links the value of an existing binary variable to the satisfaction of a linear constraint. In certain cases, it is sufficient to link from an existing binary variable to the constraint, but not the other way. This is what indicator constraints do.
The syntax is very similar to equivalence:
`Model.add_indicator(bvar, linear_ct, active_value=1, name=None)`
bvar is the existing binary variable
linear-ct is a discrete linear constraint
active_value can take values 1 or 0 (the default is 1)
name is an optional string to name the indicator.
The indicator constraint works as follows: if the binary variable is set to 1, the constraint is satisfied; if the binary variable is set to 0, anything can happen.
One noteworthy difference between indicators and equivalences is that, for indicators, the linear constraint need not be discrete.
In the following small model, we first solve without the indicator: both b and x are set to their upper bound, and the final objective is 200.
Then we add an indicator stating that when b equals 1, then x must be less than 3.14; the resulting objective is 103.14, as b is set to 1, which triggers the x <= 3.14 constraint.
Note that the right-hand side constraint is not discrete (because of 3.14).
End of explanation
m9 = Model(name='logical9')
x = m9.continuous_var(name='x', ub=100)
y = m9.integer_var(name='iy', ub = 11)
z = m9.integer_var(name='iz', ub = 13)
m9.add_if_then(y+z >= 10, x <= 3.14)
# y and z are pushed to their ub, so x is down to 3.14
m9.maximize(x + 100*(y + z))
m9.solve()
m9.print_solution()
Explanation: Step 5: Learn about if-then
In this section we explore the Model.add_if_then construct which links the truth value of two constraints:
Model.add_if_then(if_ct, then_ct) ensures that, when constraint if_ct is satisfied, then then_ct is also satisfied.
When if_ct is not satisfied, then_ct is free to be satisfied or not.
The syntax is:
`Model.add_if_then(if_ct, then_ct, negate=False)`
if_ct is a discrete linear constraint
then_ct is any linear constraint (not necessarily discrete),
negate is an optional flag to reverse the logic, that is satisfy then_ct if if_ct is not (more on this later)
As for indicators, the then_ct need not be discrete.
Model.add_if_then(if_ct, then_ct) is roughly equivalent to Model.add_indicator(if_ct.status_var, then_ct).
End of explanation
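To illustrate the negate flag mentioned above, here is a small sketch (an illustrative model, not from the original notebook): the then-constraint is enforced precisely when the if-constraint does not hold.
mneg = Model(name='ifthen_negate_demo')
u = mneg.integer_var(name='u', ub=20)
w = mneg.continuous_var(name='w', ub=100)
# if (u >= 10) does NOT hold, then w must stay below 1.5
mneg.add_if_then(u >= 10, w <= 1.5, negate=True)
mneg.maximize(w - u)
assert mneg.solve()
mneg.print_solution()   # best trade-off: u = 10 lets w reach 100, objective 90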
# y and z are pushed to their ub, so x is down to 3.14
m9.maximize(x + 2 *(y + z))
m9.solve()
m9.print_solution()
assert abs(m9.objective_value - 118) <= 1e-2
Explanation: In this second variant, the objective coefficient for (y+z) is 2 instead of 100, so x dominates the objective and reaches its upper bound, while (y+z) must stay below 10 (i.e. at most 9) to avoid triggering the x <= 3.14 constraint, which is what we observe.
End of explanation |
8,343 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Let X be an M x N matrix. Denote by xi the i-th column of X. I want to create a 3-dimensional N x M x M array consisting of the M x M matrices xi.dot(xi.T). | Problem:
import numpy as np
X = np.random.randint(2, 10, (5, 6))
result = X.T[:, :, None] * X.T[:, None] |
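A quick sanity check of the broadcasting trick above (illustrative only): each slice should equal the outer product of the corresponding column, and np.einsum gives the same result.
for j in range(X.shape[1]):
    assert np.allclose(result[j], np.outer(X[:, j], X[:, j]))
result_einsum = np.einsum('aj,bj->jab', X, X)   # result_einsum[j, a, b] = X[a, j] * X[b, j]
assert np.allclose(result, result_einsum)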
8,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Start here to begin with Stingray.
Step1: Creating a light curve
Step2: A Lightcurve object can be created in two ways
Step3: Create 1000 random Poisson-distributed counts
Step4: Create a Lightcurve object with the times and counts array.
Step5: The number of data points can be counted with the len function.
Step6: Note the warnings thrown by the syntax above. By default, stingray does a number of checks on the data that is put into the Lightcurve class. For example, it checks whether it's evenly sampled. It also computes the time resolution dt. All of these checks take time. If you know the time resolution, it's a good idea to put it in manually. If you know that your light curve is well-behaved (for example, because you know the data really well, or because you've generated it yourself, as we've done above), you can skip those checks and save a bit of time
Step7: 2. Photon Arrival Times
Often, you might have unbinned photon arrival times, rather than a light curve with time stamps and associated measurements. If this is the case, you can use the make_lightcurve method to turn these photon arrival times into a regularly binned light curve.
Step8: The time bins and respective counts can be seen with lc.counts and lc.time
Step9: One useful feature is that you can explicitly pass in the start time and the duration of the observation. This can be helpful because the chance that a photon will arrive exactly at the start of the observation and the end of the observation is very small. In practice, when making multiple light curves from the same observation (e.g. individual light curves of multiple detectors, or for different energy ranges) this can lead to the creation of light curves with time bins that are slightly offset from one another. Here, passing in the total duration of the observation and the start time can be helpful.
Step10: Properties
A Lightcurve object has the following properties
Step11: Note that by default, stingray assumes that the user is passing a light curve in counts per bin. That is, the counts in bin $i$ will be the number of photons that arrived in the interval $t_i - 0.5\Delta t$ and $t_i + 0.5\Delta t$. Sometimes, data is given in count rate, i.e. the number of events that arrive within an interval of a second. The two will only be the same if the time resolution of the light curve is exactly 1 second.
Whether the input data is in counts per bin or in count rate can be toggled via the boolean input_counts keyword argument. By default, this argument is set to True, and the code assumes the light curve passed into the object is in counts/bin. By setting it to False, the user can pass in count rates
Step12: Internally, both counts and countrate attribute will be defined no matter what the user passes in, since they're trivially converted between each other through a multiplication/division with `dt
Step13: Error Distributions in stingray.Lightcurve
The instruments that record our data impose measurement noise on our measurements. Depending on the type of instrument, the statistical distribution of that noise can be different. stingray was originally developed with X-ray data in mind, where most data comes in the form of photon arrival times, which generate measurements distributed according to a Poisson distribution. By default, err_dist is assumed to be Poisson, and this is the only statistical distribution currently fully supported. But you can put in your own errors (via counts_err or countrate_err). It'll produce a warning, and be aware that some of the statistical assumptions made about downstream products (e.g. the normalization of periodograms) may not be correct
Step14: Good Time Intervals
Lightcurve (and most other core stingray classes) support the use of Good Time Intervals (or GTIs), which denote the parts of an observation that are reliable for scientific purposes. Often, GTIs introduce gaps (e.g. where the instrument was off, or affected by solar flares). By default, GTIs are passed and don't apply to the data within a Lightcurve object, but become relevant in a number of circumstances, such as when generating Powerspectrum objects.
If no GTIs are given at instantiation of the Lightcurve class, an artificial GTI will be created spanning the entire length of the data set being passed in
Step15: We'll get back to these when we talk more about some of the methods that apply GTIs to the data.
Operations
Addition/Subtraction
Two light curves can be summed up or subtracted from each other if they have same time arrays.
Step16: Negation
A negation operation on the lightcurve object inverts the count array from positive to negative values.
Step17: Indexing
Count value at a particular time can be obtained using indexing.
Step18: A Lightcurve can also be sliced to generate a new object.
Step19: Methods
Concatenation
Two light curves can be combined into a single object using the join method. Note that both of them must not have overlapping time arrays.
Step20: Truncation
A light curve can also be truncated.
Step21: Note
Step22: Re-binning
The time resolution (dt) can also be changed to a larger value.
Note
Step23: Sorting
A lightcurve can be sorted using the sort method. This function sorts time array and the counts array is changed accordingly.
Step24: You can sort by the counts array using sort_counts method which changes time array accordingly
Step25: Plotting
A curve can be plotted with the plot method.
Step26: A plot can also be customized using several keyword arguments.
Step27: The figure drawn can also be saved in a file using keywords arguments in the plot method itself.
Step28: Note
Step29: Checking the Light Curve for Irregularities
You can perform checks on the behaviour of the light curve, similar to what's done when instantiating a Lightcurve object when skip_checks=False, by calling the relevant method
Step30: Let's add some badly formatted GTIs
Step31: MJDREF and Shifting Times
The mjdref keyword argument defines a reference time in Modified Julian Date. Often, X-ray missions count their internal time in seconds from a given reference date and time (so that numbers don't become arbitrarily large). The data is then in the format of Mission Elapsed Time (MET), or seconds since that reference time.
mjdref is generally passed into the Lightcurve object at instantiation, but it can be changed later
Step32: This change only affects the reference time, not the values given in the time attribute. However, it is also possible to shift the entire light curve, along with its GTIs
Step33: Calculating a baseline
TODO
Step34: This light curve has uneven bins. It has a large gap between 3 and 10, and a smaller gap between 14 and 17. We can use the split method to split it into three contiguous segments
Step35: This has split the light curve into three contiguous segments. You can adjust the tolerance for the size of gap that's acceptable via the min_gap attribute. You can also require a minimum number of data points in the output light curves. This is helpful when you're only interested in contiguous segments of a certain length
Step36: What if we only want the long segment?
Step37: A special case of splitting your light curve object is to split by GTIs. This can be helpful if you want to look at individual contiguous segments separately
Step38: Because I'd passed in GTIs that define the range from 0-8 and from 12-20 as good time intervals, the light curve will be split into two individual ones containing all data points falling within these ranges.
You can also apply the GTIs directly to the original light curve, which will filter time, counts, countrate, counts_err and countrate_err to only fall within the bounds of the GTIs
Step39: Caution
Step40: As you can see, the time bins 8-12 have been dropped, since they fall outside of the GTIs.
Analyzing Light Curve Segments
There's some functionality in stingray aimed at making analysis of individual light curve segments (or chunks, as they're called throughout the code) efficient.
One helpful function tells you the length that segments should have to satisfy two conditions
Step41: So we have time bins of 1 second time resolution, each with an average of 100 counts/bin. We require at least 2 time bins in each segment, and also a minimum number of total counts in the segment of 300. In theory, you'd expect to need 3 time bins (so 3-second segments) to satisfy the condition above. However, the Poisson distribution is quite variable, so we cannot guarantee that all bins will have a total number of counts above 300. Hence, our segments need to be 4 seconds long.
We can now use these segments to do some analysis, using the analyze_lc_chunks method. In the simplest case, we can use a standard numpy operation to learn something about the properties of each segment
Step43: This splits the light curve into 10-second segments, and then finds the median number of counts/bin in each segment. For a flat light curve like the one we generated above, this isn't super interesting, but this method can be helpful for more complex analyses. Instead of np.median, you can also pass in your own function
Step44: Compatibility with Lightkurve
The Lightkurve package provides a large amount of complementary functionality to stingray, in particular for data observed with Kepler and TESS, stars and exoplanets, and unevenly sampled data. We have implemented a conversion method that converts to/from stingray's native Lightcurve object and Lightkurve's native LightCurve object. Equivalent functionality exists in Lightkurve, too.
Step45: Let's do the roundtrip to stingray
Step46: Similarly, we can transform Lightcurve objects to and from astropy.TimeSeries objects | Python Code:
import numpy as np
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
Explanation: Start here to begin with Stingray.
End of explanation
from stingray import Lightcurve
Explanation: Creating a light curve
End of explanation
times = np.arange(1000)
times[:10]
Explanation: A Lightcurve object can be created in two ways :
From an array of time stamps and an array of counts.
From photon arrival times.
1. Array of time stamps and counts
Create 1000 time stamps
End of explanation
counts = np.random.poisson(100, size=len(times))
counts[:10]
Explanation: Create 1000 random Poisson-distributed counts:
End of explanation
lc = Lightcurve(times, counts)
Explanation: Create a Lightcurve object with the times and counts array.
End of explanation
len(lc)
Explanation: The number of data points can be counted with the len function.
End of explanation
dt = 1
lc = Lightcurve(times, counts, dt=dt, skip_checks=True)
Explanation: Note the warnings thrown by the syntax above. By default, stingray does a number of checks on the data that is put into the Lightcurve class. For example, it checks whether it's evenly sampled. It also computes the time resolution dt. All of these checks take time. If you know the time resolution, it's a good idea to put it in manually. If you know that your light curve is well-behaved (for example, because you know the data really well, or because you've generated it yourself, as we've done above), you can skip those checks and save a bit of time:
End of explanation
arrivals = np.loadtxt("photon_arrivals.txt")
arrivals[:10]
lc_new = Lightcurve.make_lightcurve(arrivals, dt=1)
Explanation: 2. Photon Arrival Times
Often, you might have unbinned photon arrival times, rather than a light curve with time stamps and associated measurements. If this is the case, you can use the make_lightcurve method to turn these photon arrival times into a regularly binned light curve.
End of explanation
lc_new.counts
lc_new.time
Explanation: The time bins and respective counts can be seen with lc.counts and lc.time
End of explanation
lc_new = Lightcurve.make_lightcurve(arrivals, dt=1.0, tstart=1.0, tseg=9.0)
Explanation: One useful feature is that you can explicitly pass in the start time and the duration of the observation. This can be helpful because the chance that a photon will arrive exactly at the start of the observation and the end of the observation is very small. In practice, when making multiple light curves from the same observation (e.g. individual light curves of multiple detectors, or for different energy ranges) this can lead to the creation of light curves with time bins that are slightly offset from one another. Here, passing in the total duration of the observation and the start time can be helpful.
End of explanation
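A small illustrative sketch of that point: two event lists from the same (hypothetical) observation are binned onto exactly the same grid because tstart and tseg are shared.
arrivals_det1 = arrivals[::2]
arrivals_det2 = arrivals[1::2]
lc_det1 = Lightcurve.make_lightcurve(arrivals_det1, dt=1.0, tstart=1.0, tseg=9.0)
lc_det2 = Lightcurve.make_lightcurve(arrivals_det2, dt=1.0, tstart=1.0, tseg=9.0)
# the time grids match exactly, so the two light curves can be compared bin by bin
assert np.allclose(lc_det1.time, lc_det2.time)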
lc.n == len(lc)
Explanation: Properties
A Lightcurve object has the following properties :
time : numpy array of time values
counts : numpy array of counts per bin values
counts_err: numpy array with the uncertainties on the values in counts
countrate : numpy array of counts per second
countrate_err: numpy array of the uncertainties on the values in countrate
n : Number of data points in the lightcurve
dt : Time resolution of the light curve
tseg : Total duration of the light curve
tstart : Start time of the light curve
meancounts: The mean counts of the light curve
meanrate: The mean count rate of the light curve
mjdref: MJD reference date (tstart / 86400 gives the date in MJD at the start of the observation)
gti:Good Time Intervals. They indicate the "safe" time intervals to be used during the analysis of the light curve.
err_dist: Statistic of the Lightcurve, it is used to calculate the uncertainties and other statistical values appropriately. It propagates to Spectrum classes
End of explanation
# times with a resolution of 0.1
dt = 0.1
times = np.arange(0, 100, dt)
times[:10]
mean_countrate = 100.0
countrate = np.random.poisson(mean_countrate, size=len(times))
lc = Lightcurve(times, counts=countrate, dt=dt, skip_checks=True, input_counts=False)
Explanation: Note that by default, stingray assumes that the user is passing a light curve in counts per bin. That is, the counts in bin $i$ will be the number of photons that arrived in the interval $t_i - 0.5\Delta t$ and $t_i + 0.5\Delta t$. Sometimes, data is given in count rate, i.e. the number of events that arrive within an interval of a second. The two will only be the same if the time resolution of the light curve is exactly 1 second.
Whether the input data is in counts per bin or in count rate can be toggled via the boolean input_counts keyword argument. By default, this argument is set to True, and the code assumes the light curve passed into the object is in counts/bin. By setting it to False, the user can pass in count rates:
End of explanation
print(mean_countrate)
print(lc.countrate[:10])
mean_counts = mean_countrate * dt
print(mean_counts)
print(lc.counts[:10])
Explanation: Internally, both the counts and countrate attributes will be defined no matter what the user passes in, since they're trivially converted between each other through a multiplication/division with `dt`:
End of explanation
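A quick consistency check of that conversion (illustrative only), using the attributes listed above:
# counts and countrate only differ by the bin size dt
assert np.allclose(lc.counts, lc.countrate * lc.dt)
print(lc.meanrate, lc.meancounts)   # mean count rate vs mean counts per bin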
times = np.arange(1000)
mean_flux = 100.0 # mean flux
std_flux = 2.0 # standard deviation on the flux
# generate fluxes with a Gaussian distribution and
# an array of associated uncertainties
flux = np.random.normal(loc=mean_flux, scale=std_flux, size=len(times))
flux_err = np.ones_like(flux) * std_flux
lc = Lightcurve(times, flux, err=flux_err, err_dist="gauss", dt=1.0, skip_checks=True)
Explanation: Error Distributions in stingray.Lightcurve
The instruments that record our data impose measurement noise on our measurements. Depending on the type of instrument, the statistical distribution of that noise can be different. stingray was originally developed with X-ray data in mind, where most data comes in the form of photon arrival times, which generate measurements distributed according to a Poisson distribution. By default, err_dist is assumed to be Poisson, and this is the only statistical distribution currently fully supported. But you can put in your own errors (via counts_err or countrate_err). It'll produce a warning, and be aware that some of the statistical assumptions made about downstream products (e.g. the normalization of periodograms) may not be correct:
End of explanation
times = np.arange(1000)
counts = np.random.poisson(100, size=len(times))
lc = Lightcurve(times, counts, dt=1, skip_checks=True)
lc.gti
print(times[0]) # first time stamp in the light curve
print(times[-1]) # last time stamp in the light curve
print(lc.gti) # the GTIs generated within Lightcurve
gti = [(0, 500), (600, 1000)]
lc = Lightcurve(times, counts, dt=1, skip_checks=True, gti=gti)
print(lc.gti)
Explanation: Good Time Intervals
Lightcurve (and most other core stingray classes) support the use of Good Time Intervals (or GTIs), which denote the parts of an observation that are reliable for scientific purposes. Often, GTIs introduce gaps (e.g. where the instrument was off, or affected by solar flares). By default, GTIs are passed and don't apply to the data within a Lightcurve object, but become relevant in a number of circumstances, such as when generating Powerspectrum objects.
If no GTIs are given at instantiation of the Lightcurve class, an artificial GTI will be created spanning the entire length of the data set being passed in:
End of explanation
lc = Lightcurve(times, counts, dt=1, skip_checks=True)
lc_rand = Lightcurve(np.arange(1000), [500]*1000, dt=1, skip_checks=True)
lc_sum = lc + lc_rand
print("Counts in light curve 1: " + str(lc.counts[:5]))
print("Counts in light curve 2: " + str(lc_rand.counts[:5]))
print("Counts in summed light curve: " + str(lc_sum.counts[:5]))
Explanation: We'll get back to these when we talk more about some of the methods that apply GTIs to the data.
Operations
Addition/Subtraction
Two light curves can be summed up or subtracted from each other if they have same time arrays.
End of explanation
lc_neg = -lc
lc_sum = lc + lc_neg
np.all(lc_sum.counts == 0) # All the points on lc and lc_neg cancel each other
Explanation: Negation
A negation operation on the lightcurve object inverts the count array from positive to negative values.
End of explanation
lc[120]
Explanation: Indexing
Count value at a particular time can be obtained using indexing.
End of explanation
lc_sliced = lc[100:200]
len(lc_sliced.counts)
Explanation: A Lightcurve can also be sliced to generate a new object.
End of explanation
lc_1 = lc
lc_2 = Lightcurve(np.arange(1000, 2000), np.random.rand(1000)*1000, dt=1, skip_checks=True)
lc_long = lc_1.join(lc_2, skip_checks=True) # Or vice-versa
print(len(lc_long))
Explanation: Methods
Concatenation
Two light curves can be combined into a single object using the join method. Note that both of them must not have overlapping time arrays.
End of explanation
lc_cut = lc_long.truncate(start=0, stop=1000)
len(lc_cut)
Explanation: Truncation
A light curve can also be truncated.
End of explanation
lc_cut = lc_long.truncate(start=500, stop=1500, method='time')
lc_cut.time[0], lc_cut.time[-1]
Explanation: Note : By default, the start and stop parameters are assumed to be given as indices of the time array. However, the start and stop values can also be given as time values in the same units as the time array.
End of explanation
lc_rebinned = lc_long.rebin(2)
print("Old time resolution = " + str(lc_long.dt))
print("Number of data points = " + str(lc_long.n))
print("New time resolution = " + str(lc_rebinned.dt))
print("Number of data points = " + str(lc_rebinned.n))
Explanation: Re-binning
The time resolution (dt) can also be changed to a larger value.
Note : While the new resolution need not be an integer multiple of the previous time resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.
End of explanation
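To make that last point concrete, here is an illustrative rebinning to 3 seconds (not an integer multiple of the 1-second resolution of lc_long), where the leftover fraction at the end of the light curve is dropped:
lc_rebinned_odd = lc_long.rebin(3)
print("new dt:", lc_rebinned_odd.dt)
print("total duration before/after:", lc_long.tseg, lc_rebinned_odd.tseg)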
new_lc_long = lc_long[:] # Copying into a new object
new_lc_long = new_lc_long.sort(reverse=True)
new_lc_long.time[0] == max(lc_long.time)
Explanation: Sorting
A lightcurve can be sorted using the sort method. This function sorts time array and the counts array is changed accordingly.
End of explanation
new_lc = lc_long[:]
new_lc = new_lc.sort_counts()
new_lc.counts[-1] == max(lc_long.counts)
Explanation: You can sort by the counts array using sort_counts method which changes time array accordingly:
End of explanation
lc.plot()
Explanation: Plotting
A curve can be plotted with the plot method.
End of explanation
lc.plot(labels=('Time', "Counts"), # (xlabel, ylabel)
axis=(0, 1000, -50, 150), # (xmin, xmax, ymin, ymax)
title="Random generated lightcurve",
marker='c:') # c is for cyan and : is the marker style
Explanation: A plot can also be customized using several keyword arguments.
End of explanation
lc.plot(marker = 'k', save=True, filename="lightcurve.png")
Explanation: The figure drawn can also be saved in a file using keywords arguments in the plot method itself.
End of explanation
from stingray import sampledata
lc = sampledata.sample_data()
lc.plot()
Explanation: Note : See utils.savefig function for more options on saving a file.
Sample Data
Stingray also has a sample Lightcurve data which can be imported from within the library.
End of explanation
time = np.hstack([np.arange(0, 10, 0.1), np.arange(10, 20, 0.3)]) # uneven time resolution
counts = np.random.poisson(100, size=len(time))
lc = Lightcurve(time, counts, dt=1.0, skip_checks=True)
lc.check_lightcurve()
Explanation: Checking the Light Curve for Irregularities
You can perform checks on the behaviour of the light curve, similar to what's done when instantiating a Lightcurve object when skip_checks=False, by calling the relevant method:
End of explanation
gti = [(10, 100), (20, 30, 40), ((1, 2), (3, 4, (5, 6)))] # not a well-behaved GTI
lc = Lightcurve(time, counts, dt=0.1, skip_checks=True, gti=gti)
lc.check_lightcurve()
Explanation: Let's add some badly formatted GTIs:
End of explanation
mjdref = 91254
time = np.arange(1000)
counts = np.random.poisson(100, size=len(time))
lc = Lightcurve(time, counts, dt=1, skip_checks=True, mjdref=mjdref)
print(lc.mjdref)
mjdref_new = 91254 + 20
lc_new = lc.change_mjdref(mjdref_new)
print(lc_new.mjdref)
Explanation: MJDREF and Shifting Times
The mjdref keyword argument defines a reference time in Modified Julian Date. Often, X-ray missions count their internal time in seconds from a given reference date and time (so that numbers don't become arbitrarily large). The data is then in the format of Mission Elapsed Time (MET), or seconds since that reference time.
mjdref is generally passed into the Lightcurve object at instantiation, but it can be changed later:
End of explanation
gti = [(0,500), (600, 1000)]
lc.gti = gti
print("first three time bins: " + str(lc.time[:3]))
print("GTIs: " + str(lc.gti))
time_shift = 10.0
lc_shifted = lc.shift(time_shift)
print("Shifted first three time bins: " + str(lc_shifted.time[:3]))
print("Shifted GTIs: " + str(lc_shifted.gti))
Explanation: This change only affects the reference time, not the values given in the time attribute. However, it is also possible to shift the entire light curve, along with its GTIs:
End of explanation
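As an illustrative aside, the MET-style time stamps can be converted to absolute MJD values by combining them with mjdref, following the tstart / 86400 rule mentioned earlier:
mjd_times = lc.mjdref + lc.time / 86400.0
print(mjd_times[:3])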
# make a time array with a big gap and a small gap
time = np.array([1, 2, 3, 10, 11, 12, 13, 14, 17, 18, 19, 20])
counts = np.random.poisson(100, size=len(time))
lc = Lightcurve(time, counts, skip_checks=True)
lc.gti
Explanation: Calculating a baseline
TODO: Need to document this method
Working with GTIs and Splitting Light Curves
It is possible to split light curves into multiple segments. In particular, it can be useful to split light curves with large gaps into individual contiguous segments without gaps.
End of explanation
lc_split = lc.split(min_gap=2*lc.dt)
for lc_tmp in lc_split:
print(lc_tmp.time)
Explanation: This light curve has uneven bins. It has a large gap between 3 and 10, and a smaller gap between 14 and 17. We can use the split method to split it into three contiguous segments:
End of explanation
lc_split = lc.split(min_gap=6.0)
for lc_tmp in lc_split:
print(lc_tmp.time)
Explanation: This has split the light curve into three contiguous segments. You can adjust the tolerance for the size of gap that's acceptable via the min_gap attribute. You can also require a minimum number of data points in the output light curves. This is helpful when you're only interested in contiguous segments of a certain length:
End of explanation
lc_split = lc.split(min_gap=6.0, min_points=4)
for lc_tmp in lc_split:
print(lc_tmp.time)
Explanation: What if we only want the long segment?
End of explanation
# make a time array with a big gap and a small gap
time = np.arange(20)
counts = np.random.poisson(100, size=len(time))
gti = [(0,8), (12,20)]
lc = Lightcurve(time, counts, dt=1, skip_checks=True, gti=gti)
lc_split = lc.split_by_gti()
for lc_tmp in lc_split:
print(lc_tmp.time)
Explanation: A special case of splitting your light curve object is to split by GTIs. This can be helpful if you want to look at individual contiguous segments separately:
End of explanation
# make a time array with a big gap and a small gap
time = np.arange(20)
counts = np.random.poisson(100, size=len(time))
gti = [(0,8), (12,20)]
lc = Lightcurve(time, counts, dt=1, skip_checks=True, gti=gti)
Explanation: Because I'd passed in GTIs that define the range from 0-8 and from 12-20 as good time intervals, the light curve will be split into two individual ones containing all data points falling within these ranges.
You can also apply the GTIs directly to the original light curve, which will filter time, counts, countrate, counts_err and countrate_err to only fall within the bounds of the GTIs:
End of explanation
# time array before applying GTIs:
lc.time
lc.apply_gtis()
# time array after applying GTIs
lc.time
Explanation: Caution: This is one of the few methods that change the original state of the object, rather than returning a new copy of it with the changes applied! So any events falling outside of the range of the GTIs will be lost:
End of explanation
dt=1.0
time = np.arange(0, 100, dt)
counts = np.random.poisson(100, size=len(time))
lc = Lightcurve(time, counts, dt=dt, skip_checks=True)
min_total_counts = 300
min_total_bins = 2
estimated_chunk_length = lc.estimate_chunk_length(min_total_counts, min_total_bins)
print("The estimated length of each segment in seconds to satisfy both conditions is: " + str(estimated_chunk_length))
Explanation: As you can see, the time bins 8-12 have been dropped, since they fall outside of the GTIs.
Analyzing Light Curve Segments
There's some functionality in stingray aimed at making analysis of individual light curve segments (or chunks, as they're called throughout the code) efficient.
One helpful function tells you the length that segments should have to satisfy two conditions: (1) the minimum number of time bins in the segment, and (2) the minimum total number of counts (or flux) in each segment.
Let's give this a try with an example:
End of explanation
start_times, stop_times, lc_sums = lc.analyze_lc_chunks(chunk_length = 10.0, func=np.median)
lc_sums
Explanation: So we have time bins of 1 second time resolution, each with an average of 100 counts/bin. We require at least 2 time bins in each segment, and also a minimum number of total counts in the segment of 300. In theory, you'd expect to need 3 time bins (so 3-second segments) to satisfy the condition above. However, the Poisson distribution is quite variable, so we cannot guarantee that all bins will have a total number of counts above 300. Hence, our segments need to be 4 seconds long.
We can now use these segments to do some analysis, using the analyze_lc_chunks method. In the simplest case, we can use a standard numpy operation to learn something about the properties of each segment:
End of explanation
def myfunc(lc):
    """Not a very interesting function"""
return np.sum(lc.counts) * 10.0
start_times, stop_times, lc_result = lc.analyze_lc_chunks(chunk_length=10.0, func=myfunc)
lc_result
Explanation: This splits the light curve into 10-second segments, and then finds the median number of counts/bin in each segment. For a flat light curve like the one we generated above, this isn't super interesting, but this method can be helpful for more complex analyses. Instead of np.median, you can also pass in your own function:
End of explanation
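Another illustrative custom statistic (not from the original notebook): the fractional scatter (standard deviation over mean) of the counts in each 10-second segment.
frac_scatter = lambda lc_chunk: np.std(lc_chunk.counts) / np.mean(lc_chunk.counts)
start_times, stop_times, scatter = lc.analyze_lc_chunks(chunk_length=10.0, func=frac_scatter)
scatter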
import lightkurve
lc_new = lc.to_lightkurve()
type(lc_new)
lc_new.time
lc_new.flux
Explanation: Compatibility with Lightkurve
The Lightkurve package provides a large amount of complementary functionality to stingray, in particular for data observed with Kepler and TESS, stars and exoplanets, and unevenly sampled data. We have implemented a conversion method that converts to/from stingray's native Lightcurve object and Lightkurve's native LightCurve object. Equivalent functionality exists in Lightkurve, too.
End of explanation
lc_back = lc_new.to_stingray()
lc_back.time
lc_back.counts
Explanation: Let's do the roundtrip to stingray:
End of explanation
dt=1.0
time = np.arange(0, 100, dt)
counts = np.random.poisson(100, size=len(time))
lc = Lightcurve(time, counts, dt=dt, skip_checks=True)
# convert to an astropy.TimeSeries object
ts = lc.to_astropy_timeseries()
type(ts)
ts[:10]
Explanation: Similarly, we can transform Lightcurve objects to and from astropy.TimeSeries objects:
End of explanation |
8,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Benchmarks of different versions of Cross Correlations
Author
Step1: Table of Values
In the below table, I compare four different methods for implementing cross correlation.
NoGpuSupport - this is OpenCV's normal implementation of cross correlation using the matchTemplate function.
GpuSupport - This is OpenCV's GPU implementation of the matchTemplate function.
Decompose - This is somewhat of a dummy test that demonstrates the maximum speed at which overlap and add could work. This does not include any additions or memory copies. It just calculates how long it takes to compute the cross correlation multiple times based on the value by which the image would be broken up, i.e. if the L value is set to 512, there would be a total of 4 cross correlation operations for a 1024x1024 image.
OverlapAdd - This benchmark demonstrates my implementation of overlap and add for a 2D signal.
In the table below, Problem Space is referring to the image size, i.e. 512 means that the image is 512x512.
Step2: Plotted results
In the following plot, I visually demonstrate the statistics for Iterations/sec. These are plotted using log scaling so that smaller values can easily be seen. | Python Code:
from __future__ import print_function
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
Explanation: Benchmarks of different versions of Cross Correlations
Author: Cody W. Eilar
In this notebook, I explore speed comparisons of several different methods of implementing cross correlation in C++. All these experiments were done using a kernel that is 17x17.
End of explanation
data = pd.read_csv("./cross_correlation_results.csv")
data[['Experiment', 'Problem Space', 'Baseline', 'Iterations/sec', 'Min (us)', 'Mean (us)',
'Max (us)', 'Standard Deviation']]
Explanation: Table of Values
In the below table, I compare four different methods for implementing cross correlation.
NoGpuSupport - this is OpenCV's normal implementation of cross correlation using the matchTemplate function.
GpuSupport - This is OpenCV's GPU implementation of the matchTemplate function.
Decompose - This is somewhat of a dummy test that demonstrates the maximum speed at which overlap and add could work. This does not include any additions or memory copies. It just calculates how long it takes to compute the cross correlation multiple times based on the value by which the image would be broken up, i.e. if the L value is set to 512, there would be a total of 4 cross correlation operations for a 1024x1024 image.
OverlapAdd - This benchmark demonstrates my implementation of overlap and add for a 2D signal.
In the table below, Problem Space is referring to the image size, i.e. 512 means that the image is 512x512.
End of explanation
import matplotlib.cm as cm
prob_space = data.groupby('Experiment')
ind = np.arange(len(data.groupby('Problem Space')))
colors = cm.rainbow(np.linspace(0, 1, len(ind)))
fig, ax = plt.subplots()
width = .15;
offset = 0
rects = []
names = []
for (name, group), c in zip(prob_space, colors):
names.append(name)
rects.append(ax.bar(ind +offset, np.log10(group['Iterations/sec']), width, color=c))
offset = offset + width
ax.set_ylabel('log10(Frames per second)')
ax.set_xlabel('Image size in pixels')
ax.set_title('Comparison of Xcorr Methods')
ax.set_xticks(ind + width)
ax.set_xticklabels(data['Problem Space'].unique())
ax.legend(rects, names, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: Plotted results
In the following plot, I visually demonstrate the statistics for Iterations/sec. These are plotted using log scaling so that smaller values can easily be seen.
End of explanation |
8,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
For high dpi displays.
Step1: 0. General note
This example compares the pressure calculated from pytheos with the original publication for the MgO (periclase) scale by Speziale 2001.
1. Global setup
Step2: 3. Compare | Python Code:
%config InlineBackend.figure_format = 'retina'
Explanation: For high dpi displays.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
Explanation: 0. General note
This example compares the pressure calculated from pytheos with the original publication for the MgO (periclase) scale by Speziale 2001.
1. Global setup
End of explanation
eta = np.linspace(1., 0.60, 21)
print(eta)
speziale_mgo = eos.periclase.Speziale2001()
speziale_mgo.print_equations()
speziale_mgo.print_equations()
speziale_mgo.print_parameters()
v0 = 74.698
speziale_mgo.three_r
v = v0 * (eta)
temp = 300.
p = speziale_mgo.cal_p(v, temp * np.ones_like(v))
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
print("{0: .3f} {1: .2f}".format(eta_i, p_i))
v = speziale_mgo.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print((v/v0))
Explanation: 3. Compare
End of explanation |
8,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Lesson 2
Step2: <img src = "funsyn.jpg">
Modules
A set of related functions can be grouped together as module
A module is nothing but a python file
The open source community continuously builds modules and makes them available to us
To access these modules we need to use the "import command"
Let us import a commonly used module and access one of its functions.
Step3: It is also possible to import specific functions from within the module directly. When you do this make sure you have not used the same name for your function. | Python Code:
# FUNCTION DEFINITION
def check_if_5(user_number):
    """This function just checks if the number passed to it is equal
    to 5. It returns 1 if the number is 5 and returns 0 if the number is not 5"""
if user_number == 5:
return 1
else:
return 0
#FUNCTION CALL
return_val = check_if_5(5)
if return_val == 1:
print("It is 5")
else:
print("It is not 5")
Explanation: Lesson 2: Functions and Modules
Functions
Block of organized, reusable code that is used to perform a single, related action
Provide better modularity for your application and a high degree of code reusing
Python gives you many built-in functions like print(), len() etc..
It is also possible to define user-defined functions
Global vs. local variables (a short example follows below)
Write a small function that checks if the number entered by the user is 5
End of explanation
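A short illustrative example of the global vs. local variables point listed above: a function can read a global name, but names assigned inside it are local and disappear when the function returns.
counter = 10          # global variable

def add_to_counter(amount):
    result = counter + amount   # reading the global 'counter' is allowed; 'result' is local
    return result

print(add_to_counter(5))   # prints 15
# print(result)            # would raise a NameError: 'result' only exists inside the function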
import random
print('A random number between 1 and 6 is: ',random.randint(1,6))
Explanation: <img src = "funsyn.jpg">
Modules
A set of related functions can be grouped together as module
A module is nothing but a python file
The open source community continuously builds modules and makes them available to us
To access these modules we need to use the "import command"
Let us import a commonly used module and access one of its functions.
End of explanation
from random import randint
print('A random number between 1 and 6 is: ',randint(1,6))
Explanation: It is also possible to import specific functions from within the module directly. When you do this make sure you have not used the same name for your function.
End of explanation |
8,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get Notebook from github.com and other source.
by openthings@163.com, 2016-04.
A general-purpose tool for updating and maintaining notebooks.
The original URL list is stored in the text file git_list.txt.
git_list.txt is converted to git_list.md for use in GitBook.
git_list.txt is converted to git_list.ipynb for use in Jupyter.
Step1: Read the URL list into a string variable.
<font color="red">Note: to keep the output short, only the first specified number of characters are displayed.</font>
Step2: Split the string into names and URLs.
Step4: Save to a Markdown file.
Step5: 抓取git库中文件到本地。如果已经存在,则git pull,否则git clone.
<font color="red">使用了IPython的!魔法操作符来执行shell操作。</font> | Python Code:
from pprint import *
Explanation: Get Notebook from github.com and other source.
by openthings@163.com, 2016-04.
A general-purpose tool for updating and maintaining notebooks.
The original URL list is stored in the text file git_list.txt.
git_list.txt is converted to git_list.md for use in GitBook.
git_list.txt is converted to git_list.ipynb for use in Jupyter.
End of explanation
url_str = open("git_list.txt").read()
print(url_str[0:300] + "\n\n......")
Explanation: Read the URL list into a string variable.
<font color="red">Note: to keep the output short, only the first specified number of characters are displayed.</font>
End of explanation
url_line = url_str.split("#")
url_list = []
for url in url_line:
url2 = url.strip().split("\n")
if len(url2)>1:
uname = url2[0]
ugit = url2[1]
url_dict = {"uname":uname,"ugit":ugit}
url_list.append(url_dict)
print("Total:",len(url_list))
pprint(url_list[0:3])
#print(uname,"\n",ugit)
Explanation: Split the string into names and URLs.
End of explanation
flist = open("git_list.md","w+")
flist.write("""
## IPython Notebook Tutorial and Skills open source...
##### by [openthings@163.com](http://my.oschina.net/u/2306127/blog?catalog=3420733), 2016-04.
""")
for d in url_list:
flist.write("##### " + d["uname"] + "\n")
flist.write("[" + d["ugit"] + "]" + "(" + d["ugit"] + ")\n")
flist.close()
print("Wrote url list to file: git_list.md")
Explanation: Save to a Markdown file.
End of explanation
import os
import os.path
index = 0
for d in url_list:
index += 1
print("\n",index,":\t",d["uname"],"\n==>>\t",d["ugit"])
git_path = os.path.split(d["ugit"])
git_name = git_path[1].split(".")[0]
#print(git_name)
if os.path.exists(git_name):
print("\t Existed, git pull:",git_name," ...")
! cd $git_name && git pull
else:
print("Git clone ......")
ucmd = "git clone " + d["ugit"]
#print(ucmd)
! $ucmd
print("Finished.")
Explanation: Fetch the files of each git repository to the local machine. If the repository already exists, run git pull; otherwise run git clone.
<font color="red">IPython's ! magic operator is used to execute shell commands.</font>
End of explanation |
8,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Magic of Television
Chapter 3
Step1: In reality, we all know that the awards are chosen not on the basis of merit and the votes, but on the basis of who puts up the most money to buy them. But fine, assuming this is how it works, we could automate the process of choosing the winners.
We will approach this problem by saying that the nominations are stored in a matrix, where the columns are categories, the rows are the nomination positions, and each cell is a tuple of the form (nominee name, score according to the votes). Stated this way, the problem of finding the winners becomes finding the maximum of each column of that matrix.
To make this a bit more realistic, we will say that the nomination positions are used to break ties when two or more nominees have the same score. If they were not nominees but just numbers, it would make no difference.
Let us assume the nomination process happens as follows
Step3: To see how this works step by step in memory, we can look at it directly here
Step4: It is very important to remember that YOU MUST MAKE A COPY OF THE LIST TO WORK WITH. The following links show how everything behaves if I make a copy or if I don't (and break everything)
With a copy
Without a copy
Part 2
Step6: Bouncers tend to be a less glamorous version of Kevin Costner in The Bodyguard (yes, I included the link again in case you did not see it in the title), and their job is to control who gets in and who does not, for example, at an awards ceremony.
In a way we can simulate this by writing a function that, given
Step7: Well, yes, whatever, it is obvious that what the function ultimately does is the intersection of 2 sets (for more information, pay more attention to set algebra in the CBC). Even though the exercise is fairly trivial at some level, remember what we discussed about them being plain sequences | Python Code:
Image(filename='./clase-16-04_images/img1.jpg')
Explanation: The Magic of Television
Chapter 3: It all ends with an award
Part 1: The awards are a complete lie
End of explanation
PRIMER_NOMINADO = 0
SEGUNDO_NOMINADO = 1
TERCER_NOMINADO = 2
CUARTO_NOMINADO = 3
ANIME = 0
NOVELA_ARGENTINA = 1
NOVELA_KOREANA = 2
NOMBRE = 0
PUNTAJE = 1
mis_nominados = [
[None]*3,
[None]*3,
[None]*3,
[None]*3,
]
mis_nominados
def nominar(nominados):
nominados[PRIMER_NOMINADO][ANIME] = ("Persona 5: The Animation",10)
nominados[SEGUNDO_NOMINADO][ANIME] = ("Steins;Gate 0",9.9)
nominados[TERCER_NOMINADO][ANIME] = ("Tokyo Ghoul: re",8)
nominados[CUARTO_NOMINADO][ANIME] = ("Magical Girl Site", 8)
nominados[PRIMER_NOMINADO][NOVELA_ARGENTINA] = ("El Sultan",0)
nominados[SEGUNDO_NOMINADO][NOVELA_ARGENTINA] = ("Simona",5)
nominados[TERCER_NOMINADO][NOVELA_ARGENTINA] = ("Golpe al corazon", 2)
nominados[CUARTO_NOMINADO][NOVELA_ARGENTINA] = ("Ojos que no ven", 4)
nominados[PRIMER_NOMINADO][NOVELA_KOREANA] = ("Another Miss Oh",9)
nominados[SEGUNDO_NOMINADO][NOVELA_KOREANA] = ("Beating Again",9.9)
nominados[TERCER_NOMINADO][NOVELA_KOREANA] = ("Goodbye Mr Black",9.9)
nominados[CUARTO_NOMINADO][NOVELA_KOREANA] = ("Nine Time Travels", 8)
mis_nominados
nominar(mis_nominados)
mis_nominados
Explanation: In reality, we all know that the awards are chosen not on the basis of merit and the votes, but on the basis of who puts up the most money to buy them. But fine, assuming this is how it works, we could automate the process of choosing the winners.
We will approach this problem by saying that the nominations are stored in a matrix, where the columns are categories, the rows are the nomination positions, and each cell is a tuple of the form (nominee name, score according to the votes). Stated this way, the problem of finding the winners becomes finding the maximum of each column of that matrix.
To make this a bit more realistic, we will say that the nomination positions are used to break ties when two or more nominees have the same score. If they were not nominees but just numbers, it would make no difference.
Let us assume the nomination process happens as follows:
* We start from an empty matrix
* There is a function that receives the matrix and fills it with the nominees
End of explanation
def obtener_ganadores(nominados):
    """Receives a matrix as a list of lists where each row is a nomination position and each column a category.
    Returns a list with the winners of each category (the maximum score of each column)."""
maximos= nominados[0][:]
for fila in nominados:
for categoria,nominado in enumerate(fila):
_,puntaje = nominado
_,puntaje_anterior = maximos[categoria]
if puntaje > puntaje_anterior:
maximos[categoria] = nominado
return maximos
obtener_ganadores(mis_nominados)
Explanation: To see how this works step by step in memory, we can look at it directly here: https://goo.gl/1UpYhM
Now, having the matrix, we can write the function:
End of explanation
Image(filename='./clase-16-04_images/img2.jpg')
Explanation: It is very important to remember that YOU MUST MAKE A COPY OF THE LIST TO WORK WITH. The following links show how everything behaves if I make a copy or if I don't (and break everything)
With a copy
Without a copy
Part 2: I already told you I am on the list
Alternative title: I Will Always Love You
End of explanation
def interseccion(secuencia_a,secuencia_b):
    """Returns a list with the elements of sequence a that are also found in sequence b, without repetitions."""
resultado = []
for elemento in secuencia_a:
if elemento not in secuencia_b:
continue
if elemento not in resultado:
resultado.append(elemento)
return resultado
interseccion([1,2,3],[2,2,2,3])
Explanation: Bouncers tend to be a less glamorous version of Kevin Costner in The Bodyguard (yes, I included the link again in case you did not see it in the title), and their job is to control who gets in and who does not, for example, at an awards ceremony.
In a way we can simulate this by writing a function that, given:
* A list of people who want to get in
* A list of people who are allowed to get in
returns a list with the people who actually got in. Obviously nobody can get in twice, so the result list should not contain repeated entries.
While we are at it, let's make it generic for any two sequences, since we will not use anything a plain sequence cannot do
End of explanation
interseccion(["Hola","Chau"] , "Hola como estas")
Explanation: Well, yes, whatever, it is obvious that what the function ultimately does is the intersection of 2 sets (for more information, pay more attention to set algebra in the CBC). Even though the exercise is fairly trivial at some level, remember what we discussed about them being plain sequences:
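For comparison, here is an illustrative sketch of the same idea using Python sets; note that, unlike the sequence-based version, sets do not preserve the original order of the elements.
def intersection_with_sets(sequence_a, sequence_b):
    # same result as interseccion, but order is not preserved
    return list(set(sequence_a) & set(sequence_b))

intersection_with_sets([1, 2, 3], [2, 2, 2, 3])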
End of explanation |
8,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentals - Chapter 7</font>
Download
Step1: Mission
Step2: Information About the Consumers
Step3: Overall Purchase Analysis
Step4: Demographic Analysis
Step5: Demographic Information by Gender
Step6: Purchase Analysis by Gender
Step7: Most Popular Consumers (Top 5)
Step8: Most Popular Items
Step9: Most Profitable Items | Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentals - Chapter 7</font>
Download: http://github.com/dsacademybr
End of explanation
# Imports
import pandas as pd
import numpy as np
# Load the file
load_file = "dados_compras.json"
purchase_file = pd.read_json(load_file, orient = "records")
purchase_file.head()
Explanation: Mission: Analyze Consumer Purchasing Behavior.
Difficulty Level: High
You have been given the task of analyzing the purchase data of a web site! The data is in JSON format and is available together with this notebook.
On the site, each user logs in with a personal account and can buy products while browsing the list of products on offer. Each product has a sale price. Age and gender data for each user were collected and are provided in the JSON file.
Your job is to deliver an analysis of the consumers' purchasing behavior. This is a common type of activity performed by Data Scientists, and the result of this work can be used, for example, to feed a Machine Learning model and make predictions about future behavior.
In this mission you will analyze the consumers' purchasing behavior using the Pandas package of the Python language, and your final report must include each of the following items:
Consumer Count
Total number of consumers
Overall Purchase Analysis
Number of unique items
Average purchase price
Total number of purchases
Total revenue
Demographic Information by Gender
Percentage and count of male buyers
Percentage and count of female buyers
Percentage and count of other / undisclosed
Purchase Analysis by Gender
Number of purchases
Average purchase price
Total Purchase Value
Purchases by age group
Identify the 5 top buyers by total purchase value and then list (in a table):
Login
Number of purchases
Average purchase price
Total Purchase Value
Most popular items
Identify the 5 most popular items by purchase count and then list (in a table):
Item ID
Item name
Number of purchases
Item price
Total Purchase Value
Most profitable items
Identify the 5 most profitable items by total purchase value and then list (in a table):
Item ID
Item name
Number of purchases
Item price
Total Purchase Value
As final considerations:
Your script must work for the provided data set.
You must use the Pandas library and Jupyter Notebook.
End of explanation
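As a starting point for the exercise cells below, here is a minimal sketch of the pandas patterns typically used for this kind of analysis. The column names 'Login' and 'Valor' are only placeholders and are not confirmed by this notebook; replace them with the actual column names shown by purchase_file.head().
# illustrative only - 'Login' and 'Valor' are assumed column names; adjust to the real ones
total_consumers = purchase_file['Login'].nunique()     # number of unique consumers
average_price = purchase_file['Valor'].mean()          # average purchase price
total_revenue = purchase_file['Valor'].sum()           # total revenue
top_buyers = (purchase_file.groupby('Login')['Valor']
              .agg(['count', 'mean', 'sum'])
              .sort_values('sum', ascending=False)
              .head(5))                                # top 5 buyers by total purchase value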
# Implement your solution here
Explanation: Information About the Consumers
End of explanation
# Implement your solution here
Explanation: Overall Purchase Analysis
End of explanation
# Implement your solution here
Explanation: Demographic Analysis
End of explanation
# Implement your solution here
Explanation: Demographic Information by Gender
End of explanation
# Implement your solution here
Explanation: Purchase Analysis by Gender
End of explanation
# Implement your solution here
Explanation: Most Popular Consumers (Top 5)
End of explanation
# Implement your solution here
Explanation: Most Popular Items
End of explanation
# Implement your solution here
Explanation: Most Profitable Items
End of explanation |
8,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defining inputs
Need to define some heterogeneous factors of production...
Step1: Note that we are shifting the distributions of worker skill and firm productivity to the right by 1.0 in order to try and avoid issues with having workers (firms) with near zero skill (productivity).
Step2: Defining a production process
Next need to define some production process...
Step3: Define a boundary value problem
Step4: Pick some collocation solver
Step5: Compute some decent initial guess
Currently I guess that $\mu(x)$ has the form...
$$ \hat{\mu}(x) = \beta_0 + \beta_1 f(x) $$
(i.e., a linear translation) of some function $f$. Using my $\hat{\mu}(x)$, I can then back out a guess for $\theta(x)$ implied by the model...
$$ \hat{\theta}(x) = \frac{H(x)}{\hat{\mu}'(x)} $$
Step6: Solve the model!
Step7: Plot some results
Step8: Plot factor payments
Note the factor_payment_1 is wages and factor_payment_2 is profits...
Step9: Plot firm size against wages and profits
Step10: Plot the density for firm size
As you can see, the theta function is hump-shaped. Nothing special, but when calculating the pdf some arrangements have to be done for this
Step11: Distributions of factor payments
Can plot the distributions of average factor payments...
Step12: Widget | Python Code:
# define some workers skill
x, loc1, mu1, sigma1 = sym.var('x, loc1, mu1, sigma1')
skill_cdf = 0.5 + 0.5 * sym.erf((sym.log(x - loc1) - mu1) / sym.sqrt(2 * sigma1**2))
skill_params = {'loc1': 1e0, 'mu1': 0.0, 'sigma1': 1.0}
workers = pyam.Input(var=x,
cdf=skill_cdf,
params=skill_params,
bounds=(1.2, 1e1), # guesses for the alpha and (1 - alpha) quantiles!
alpha=0.005,
measure=15.0 # 15x more workers than firms
)
# define some firms
y, loc2, mu2, sigma2 = sym.var('y, loc2, mu2, sigma2')
productivity_cdf = 0.5 + 0.5 * sym.erf((sym.log(y - loc2) - mu2) / sym.sqrt(2 * sigma2**2))
productivity_params = {'loc2': 1e0, 'mu2': 0.0, 'sigma2': 1.0}
firms = pyam.Input(var=y,
cdf=productivity_cdf,
params=productivity_params,
bounds=(1.2, 1e1), # guesses for the alpha and (1 - alpha) quantiles!
alpha=0.005,
measure=1.0
)
Explanation: Defining inputs
Need to define some heterogeneous factors of production...
End of explanation
xs = np.linspace(workers.lower, workers.upper, 1e4)
plt.plot(xs, workers.evaluate_pdf(xs))
plt.xlabel('Worker skill, $x$', fontsize=20)
plt.show()
Explanation: Note that we are shifting the distributions of worker skill and firm productivity to the right by 1.0 in order to try and avoid issues with having workers (firms) with near zero skill (productivity).
End of explanation
# define symbolic expression for CES between x and y
omega_A, sigma_A = sym.var('omega_A, sigma_A')
A = ((omega_A * x**((sigma_A - 1) / sigma_A) +
(1 - omega_A) * y**((sigma_A - 1) / sigma_A))**(sigma_A / (sigma_A - 1)))
# define symbolic expression for CES between x and y
r, l, omega_B, sigma_B = sym.var('r, l, omega_B, sigma_B')
B = l**omega_B * r**(1 - omega_B)
F = A * B
# positive assortativity requires that sigma_A * sigma_B < 1
F_params = {'omega_A':0.25, 'omega_B':0.5, 'sigma_A':0.5, 'sigma_B':1.0 }
Explanation: Defining a production process
Next need to define some production process...
End of explanation
problem = pyam.AssortativeMatchingProblem(assortativity='positive',
input1=workers,
input2=firms,
F=F,
F_params=F_params)
Explanation: Define a boundary value problem
End of explanation
solver = pycollocation.OrthogonalPolynomialSolver(problem)
Explanation: Pick some collocation solver
End of explanation
initial_guess = pyam.OrthogonalPolynomialInitialGuess(solver)
initial_polys = initial_guess.compute_initial_guess("Chebyshev",
degrees={'mu': 75, 'theta': 75},
f=lambda x, alpha: x**alpha,
alpha=0.25)
# quickly plot the initial conditions
xs = np.linspace(workers.lower, workers.upper, 1000)
plt.plot(xs, initial_polys['mu'](xs))
plt.plot(xs, initial_polys['theta'](xs))
plt.grid('on')
Explanation: Compute some decent initial guess
Currently I guess that $\mu(x)$ is has the form...
$$ \hat{\mu}(x) = \beta_0 + \beta_1 f(x) $$
(i.e., a linear translation) of some function $f$. Using my $\hat{\mu}(x)$, I can then back out a guess for $\theta(x)$ implied by the model...
$$ \hat{\theta}(x) = \frac{H(x)}{\hat{\mu}'(x)} $$
End of explanation
domain = [workers.lower, workers.upper]
initial_coefs = {'mu': initial_polys['mu'].coef,
'theta': initial_polys['theta'].coef}
solver.solve(kind="Chebyshev",
coefs_dict=initial_coefs,
domain=domain,
method='hybr')
solver.result.success
Explanation: Solve the model!
End of explanation
viz = pyam.Visualizer(solver)
viz.interpolation_knots = np.linspace(workers.lower, workers.upper, 1000)
viz.residuals.plot()
plt.show()
viz.normalized_residuals[['mu', 'theta']].plot(logy=True)
plt.show()
viz.solution.tail()
viz.solution[['mu', 'theta']].plot(subplots=True)
plt.show()
viz.solution[['Fxy', 'Fyl']].plot()
plt.show()
Explanation: Plot some results
End of explanation
viz.solution[['factor_payment_1', 'factor_payment_2']].plot(subplots=True)
plt.show()
Explanation: Plot factor payments
Note the factor_payment_1 is wages and factor_payment_2 is profits...
End of explanation
fig, axes = plt.subplots(1, 2, sharey=True)
axes[0].scatter(viz.solution.factor_payment_1, viz.solution.theta, alpha=0.5,
edgecolor='none')
axes[0].set_ylim(0, 1.05 * viz.solution.theta.max())
axes[0].set_xlabel('Wages, $w$')
axes[0].set_ylabel(r'Firm size, $\theta$')
axes[1].scatter(viz.solution.factor_payment_2, viz.solution.theta, alpha=0.5,
edgecolor='none')
axes[1].set_xlabel(r'Profits, $\pi$')
plt.show()
# to get correlation just use pandas!
viz.solution.corr()
# or a subset
viz.solution[['theta', 'factor_payment_1']].corr()
# or actual values!
viz.solution.corr().loc['theta']['factor_payment_1']
Explanation: Plot firm size against wages and profits
End of explanation
fig, axes = plt.subplots(1, 3)
theta_pdf = viz.compute_pdf('theta', normalize=True)
theta_pdf.plot(ax=axes[0])
axes[0].set_xlabel(r'Firm size, $\theta$')
axes[0].set_title(r'pdf')
theta_cdf = viz.compute_cdf(theta_pdf)
theta_cdf.plot(ax=axes[1])
axes[1].set_title(r'cdf')
axes[1].set_xlabel(r'Firm size, $\theta$')
theta_sf = viz.compute_sf(theta_cdf)
theta_sf.plot(ax=axes[2])
axes[2].set_title(r'sf')
axes[2].set_xlabel(r'Firm size, $\theta$')
plt.tight_layout()
plt.show()
Explanation: Plot the density for firm size
As you can see, the theta function is hump-shaped. Nothing special, but some care is needed when computing the pdf: sort the thetas while preserving their order (so we can relate them to their xs), and then make sure to use the matching x when evaluating the pdf.
The principle of Philipp's trick is:
$pdf_x(x_i)$ can be interpreted as the number of workers with ability $x_i$. $\theta_i$ is the size of the firms that employ workers of kind $x_i$. Since all firms that match with workers of type $x_i$ choose the same firm size, $pdf_x(x_i)/\theta_i$ is the number of firms of size $\theta_i$.
Say there are 100 workers with ability $x_i$ and their associated firm size $\theta_i$ is 2. Then there are $100/2 = 50$ firms of size $\theta_i$.
End of explanation
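A tiny numeric version of the argument above (illustrative only, not tied to the solver output):
# If pdf_x(x_i) "counts" 100 workers of ability x_i and each firm matching with them has size 2,
# then there are 100 / 2 = 50 firms of that size.
worker_density_at_xi = 100.0
firm_size_at_xi = 2.0
print(worker_density_at_xi / firm_size_at_xi)  # 50.0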
fig, axes = plt.subplots(1, 3)
factor_payment_1_pdf = viz.compute_pdf('factor_payment_1', normalize=True)
factor_payment_1_pdf.plot(ax=axes[0])
axes[0].set_title(r'pdf')
factor_payment_1_cdf = viz.compute_cdf(factor_payment_1_pdf)
factor_payment_1_cdf.plot(ax=axes[1])
axes[1].set_title(r'cdf')
factor_payment_1_sf = viz.compute_sf(factor_payment_1_cdf)
factor_payment_1_sf.plot(ax=axes[2])
axes[2].set_title(r'sf')
plt.tight_layout()
plt.show()
fig, axes = plt.subplots(1, 3)
factor_payment_2_pdf = viz.compute_pdf('factor_payment_2', normalize=True)
factor_payment_2_pdf.plot(ax=axes[0])
axes[0].set_title(r'pdf')
factor_payment_2_cdf = viz.compute_cdf(factor_payment_2_pdf)
factor_payment_2_cdf.plot(ax=axes[1])
axes[1].set_title(r'cdf')
factor_payment_2_sf = viz.compute_sf(factor_payment_2_cdf)
factor_payment_2_sf.plot(ax=axes[2])
axes[2].set_title(r'sf')
plt.tight_layout()
plt.show()
Explanation: Distributions of factor payments
Can plot the distributions of average factor payments...
End of explanation
import ipywidgets as widgets  # IPython.html.widgets has been removed; ipywidgets is the current package
def interactive_plot(viz, omega_A=0.25, omega_B=0.5, sigma_A=0.5, sigma_B=1.0,
loc1=1.0, mu1=0.0, sigma1=1.0, loc2=1.0, mu2=0.0, sigma2=1.0):
# update new parameters as needed
new_F_params = {'omega_A': omega_A, 'omega_B': omega_B,
'sigma_A': sigma_A, 'sigma_B': sigma_B}
viz.solver.problem.F_params = new_F_params
new_input1_params = {'loc1': loc1, 'mu1': mu1, 'sigma1': sigma1}
viz.solver.problem.input1.params = new_input1_params
new_input2_params = {'loc2': loc2, 'mu2': mu2, 'sigma2': sigma2}
viz.solver.problem.input2.params = new_input2_params
# solve the model using a hotstart initial guess
domain = [viz.solver.problem.input1.lower, viz.solver.problem.input1.upper]
initial_coefs = viz.solver._coefs_array_to_dict(viz.solver.result.x, viz.solver.degrees)
viz.solver.solve(kind="Chebyshev",
coefs_dict=initial_coefs,
domain=domain,
method='hybr')
if viz.solver.result.success:
viz._Visualizer__solution = None # should not need to access this!
viz.interpolation_knots = np.linspace(domain[0], domain[1], 1000)
viz.solution[['mu', 'theta']].plot(subplots=True)
viz.normalized_residuals[['mu', 'theta']].plot(logy=True)
else:
print("Foobar!")
viz_widget = widgets.fixed(viz)
# widgets for the model parameters
eps = 1e-2
omega_A_widget = widgets.FloatSlider(value=0.25, min=eps, max=1-eps, step=eps,
description=r"$\omega_A$")
sigma_A_widget = widgets.FloatSlider(value=0.5, min=eps, max=1-eps, step=eps,
description=r"$\sigma_A$")
omega_B_widget = widgets.FloatSlider(value=0.5, min=eps, max=1-eps, step=eps,
description=r"$\omega_B$")
sigma_B_widget = widgets.fixed(1.0)
# widgets for input distributions
loc_widget = widgets.fixed(1.0)
mu_1_widget = widgets.FloatSlider(value=0.0, min=-1.0, max=1.0, step=eps,
description=r"$\mu_1$")
mu_2_widget = widgets.FloatSlider(value=0.0, min=-1.0, max=1.0, step=eps,
description=r"$\mu_2$")
sigma_1_widget = widgets.FloatSlider(value=1.0, min=eps, max=2-eps, step=eps,
description=r"$\sigma_1$")
sigma_2_widget = widgets.FloatSlider(value=1.0, min=eps, max=2-eps, step=eps,
description=r"$\sigma_2$")
widgets.interact(interactive_plot, viz=viz_widget, omega_A=omega_A_widget,
sigma_A=sigma_A_widget, omega_B=omega_B_widget,
sigma_B=sigma_B_widget, sigma1=sigma_1_widget,
loc1=loc_widget, mu1 = mu_1_widget,
loc2=loc_widget, sigma2=sigma_2_widget, mu2 = mu_2_widget)
# widget is changing the parameters of the underlying solver
solver.result.x
Explanation: Widget
End of explanation |
8,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MECA653
Step1: 2 - What proportion of the people involved in accidents are men vs. women? Present the result graphically.
Step2: 3 - What proportion of accidents happened in daylight, at night, or at dawn/dusk? Present the result graphically.
Step3: 4 - Geographic position | Python Code:
dfc = pd.read_csv('./DATA/caracteristiques_2016.csv')
dfu = pd.read_csv('./DATA/usagers_2016.csv')
dfl = pd.read_csv('./DATA/lieux_2016.csv')
df = pd.concat([dfu, dfc, dfl], axis=1)
dfc.tail()
dfu.head()
dfl.tail()
df.head()
df = pd.concat([df, dfl], axis=1)
df.head()
Explanation: MECA653: Data processing - Analysis of the French road-accident (road safety) database
The goal here is to analyse the data published by the Ministry of the Interior on road accidents recorded in 2016.
The pandas module will be used extensively.
Sources
Link to data.gouv.fr:
https://www.data.gouv.fr/fr/datasets/base-de-donnees-accidents-corporels-de-la-circulation/#_
Database documentation:
DATA/Description_des_bases_de_donnees_ONISR_-Annees_2005_a_2016.pdf
1 - Load the databases
End of explanation
# quick-and-dirty way
(h,c)=df[df.sexe==1].shape
(f,c)=df[df.sexe==2].shape
(t,c)=df.shape
print('h/t=', h/t)
print('f/t=', f/t)
# the pandas way
df["sexe"].value_counts(normalize=True)
fig = plt.figure()
df[df.grav==2].sexe.value_counts(normalize=True).plot.pie(labels=['Homme', 'Femme'], colors= ['r', 'g'], autopct='%.2f')
Explanation: 2 - What proportion of the people involved in accidents are men vs. women? Present the result graphically.
End of explanation
dlum = df["lum"].value_counts(normalize=True)
dlum = dlum.sort_index()
dlum
dlum[3] = dlum[3:5].sum()
fig = plt.figure()
dlum[1:3].plot.pie(labels=['Jour','Aube/crépuscule', 'Nuit'], colors= ['y', 'g' , 'b'], autopct='%.2f')
Explanation: 3 - What proportion of accidents happened in daylight, at night, or at dawn/dusk? Present the result graphically.
End of explanation
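An alternative way to group the night-time categories before counting is sketched below; it assumes (per the BAAC documentation) that lum codes 3, 4 and 5 all describe night-time conditions, which should be checked against the PDF referenced above.
# Illustrative alternative: recode lum so that 3/4/5 all map to a single "night" code.
lum_grouped = df['lum'].replace({4: 3, 5: 3})
lum_grouped.value_counts(normalize=True).sort_index()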
df.lat=df.lat/100000
df.long=df.long/100000
dfp = df[df.gps=='M']
dfp = dfp[['lat','long']]
dfp = dfp[(dfp.long!=0.0) & (dfp.lat!=0.0)]
dfp.head()
#fig = plt.figure()
dfp.plot.scatter(x='long', y='lat',s=1);
df[(df.long!=0.0) & (df.lat!=0.0) & (df.gps=='M')].plot.scatter(x='long', y='lat',s=.5);
Explanation: 4 - Geographic position
End of explanation |
8,353 | Given the following text description, write Python code to implement the functionality described.
Description:
Given an array arr of integers, find the minimum number of elements that
need to be changed to make the array palindromic. A palindromic array is an array that
is read the same backwards and forwards. In one change, you can change one element to any other element.
For example:
smallest_change([1,2,3,5,4,7,9,6]) == 4
smallest_change([1, 2, 3, 4, 3, 2, 2]) == 1
smallest_change([1, 2, 3, 2, 1]) == 0
| Python Code:
def smallest_change(arr):
ans = 0
for i in range(len(arr) // 2):
if arr[i] != arr[len(arr) - i - 1]:
ans += 1
return ans |
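A quick check of the function against the examples given in the problem statement:
assert smallest_change([1, 2, 3, 5, 4, 7, 9, 6]) == 4
assert smallest_change([1, 2, 3, 4, 3, 2, 2]) == 1
assert smallest_change([1, 2, 3, 2, 1]) == 0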
8,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
字典中 to yield 表示产出和让步,对于 Python 生成器中的 yield 来说,这是成立的,yield item 这行代码会产生一个值,提供给 next(...) 调用方,此外,还会做出让步,暂停执行生成器,让调用方继续工作,直到需要使用另一个值再调用 next()。调用方会从生成器中拉取值
语法上来说,协程和生成器类似,都是定义体中包含 yield 关键字的函数,可是,在协程中,yield 通常出现在表达式的右边,可以产出值,也可以不产出 -- 如果 yield 关键字后面没有表达式,那么生成器产出 None,协程可能会从调用方接收数据,不过调用方把数据提供给协程使用的是 .send(datum) 方法,而不是 next(...) 函数。通常调用方会把值推送给协程
yield 关键字甚至还可以不接收或传出数据,不过数据如何流动,yield 都是一种流程控制工具,使用它可以实现协作式多任务:协程可以把控制器让步给中心调度程序,从而激活其他的协程
从根本上把 yield 视作控制流程的方式,这样就好理解协程了
本书前面介绍生成器函数作用不大,但是进行一系列功能改进后,得到了 Python 协程。了解 Python 协程有助于理解各个阶段的改进的功能和复杂度
本章覆盖以下话题:
生成器作为协程使用时的行为和状态
使用装饰器自动预激协程
调用方如何使用生成器对象的 .close() 和 .throw(...) 方法控制协程
协程终止时如何返回值
yield from 新语法的用途和语义
使用案例 -- 使用协程管理仿真系统中的并发活动
生成器如何进化成协程
在 Python 2.5 中实现了 yield 关键字可以在表达式中使用,并在生成器 API 中增加了 .send(value) 方法。生成器的调用可以使用 .send(...) 方法发送数据,发送的数据会称为生成器函数中 yield 表达式的值,因此生成器可以作为协程使用,协程指的是一个过程,这个过程与调用方协作,产出由调用方提供的值
除了 .send(...) 方法,还加了 throw(...) 和 .close() 方法,在本章后面讲解
在 Python 3.3 中,对生成器函数语法做了两处改动,以便更好的作为协程使用
现在,生成器可以返回一个值,以前如果在生成器中给 return 语句提供值,会抛出 SyntaxError 异常
新引入了 yield from 语法,可以把复杂的生成器重构成小型嵌套的生成器,省去了之前把生成器的工作委托给予生成器所需的大量样板代码
作为协程的生成器基本行为
Step1: 协程可以处于 4 个状态中的一个。当前状态可以使用 inspect.getgeneratorstate(...) 函数确定,该函数会返回下面字符串中的一个
'GEN_CREATED' 等待开始执行
'GEN_RUNNING' 解释器正在执行(只有在多线程才能看到这个状态。此外,生成器对象在自己身上调用 getgeneratorstate 函数也行,就是没啥用)
'GEN_SUSPENDED' 在 yield 表达式处暂停
'GEN_CLOSED' 执行结束
因为 send 方法参数会称为暂停的 yield 表达式的值,所以,仅当协程处于暂停状态时才能调用 send 方法,例如 my_coro.send(42)。不过,如果协程还没激活(即,状态是 'GEN_CREATED'),情况就不同了。因此,始终要调用 next(my_coro) 激活协程 -- 也可以调用 my_coro.send(None),效果一样。
如果创建协程对象后立即把 None 之外的值发给它,会出现下面错误:
Step2: 注意错误描述,描述的很清楚
最先调用 next(my_coro) 函数这一步通常称为 “预激”(prime) 协程(即,让协程向前执行到第一个 yield 表达式,准备好作为活跃的协程使用)。
下面是一个让我们更好理解协程行为的例子:
Step3: 注意这个是产出值的时间
Step4: 使用协程计算移动平均值
下面我们展示如何用协程计算移动平均值:
Step5: 这个无限循环表明,只要调用方不断把值发给协程,它就会一直接收值,然后生成结果。晋档调用方在协程上调用 .close() 方法,或者没有协程的引用而被垃圾回收程序回收时,这个协程才会终止。
这里的 yield 用于暂停协程,把结果发给调用方;还用于接收调用方后面发给协程的值,恢复无线循环
使用协程的好处是 total 和 count 生成为局部变量就好,不用使用闭包
Step6: 我们一定想知道如何停止这个协程,但是在此之前我们先讨论如何启动协程在使用协程之前必须要预激,这一步容易忘记,为了避免忘记,可以在协程上使用一个特殊的装饰器
预激协程的装饰器
如果不预激,那么协程没什么用。调用 my_cor.send(x) 之前,记住一定要调用 next(my_coro)。为了简化协程的用法,有时会使用一个预激装饰器,下面就是一个例子:
Step7: 使用 yield from 语法调用协程也会自动预激(一会讲),因此会和上面的 @coroutine 装饰器不兼容。Python 3.4 标准库中的 asyncio.coroutine 装饰器(第 18 章)不会预激协程,因此能兼容 yield from 方法
终止协程和异常处理
上面的出错原因是 'spam' 不能加到 total 变量上。
上面例子暗示了一种终止协程的方式:发送某个哨符值,让协程退出,内置的 None 和 Ellipsis 等常量通常作为哨符值。 Ellipsis 的优点是,数据流中不太常有这个值。还有人用 StopIteration 类(类本身,而不是实例,也不抛出)作为哨符值,也就是说是这样使用的:my_coro.send(StopIteration)
从 Python 2.5 开始,客户可以在生成器对象上调用两个方法,显式的把异常发给协程,这两个方法是 throw 和 close
generator.throw(exc_type[, exc_value[, traceback]])
致使生成器在暂停的 yield 表达式抛出指定的异常。如果生成器处理了抛出的异常,代码会向前执行到下一个 yield 表达式,而产出的值会成为调用 generator.throw 方法得到的返回值,如果生成器没有处理抛出的异常,异常会向上冒泡,传到调用方的上下文中
generator.close()
致使生成器暂停的 yield 表达式抛出的 GeneratorExit 异常。如果生成器没有处理这个异常,或者抛出了 StopIteration 异常(通常是指运行到结尾),调用方不会报错,如果收到了 GeneratorExit 异常,生成器一定不能产出值,否则解释器会抛出 RuntimeError 异常。生成器抛出的其他异常会向上冒泡,传给调用方
下面展示如何使用 close 和 throw 方法控制协程
Step8: 如果把 DemoException 异常传入 demo_exc_handling 协程,它会处理,然后继续运行,如下:
Step9: 但是,传入协程的异常没有处理,协程会停止,即状态变成 GEN_CLOSE
Step10: 如果不管协程如何结束都想做那些清理工作,要把协程定义体中相关代码放入 try/finally 块中,如下:
Step11: Python 3.3 引入 yield from 结构的主要原因之一与把异常传入嵌套的协程有关。另一个原因是让协程更方便的返回值
让协程返回值
下面的这个 averager 例子会返回结果,为了说明如何返回值,每次激活协程时不会产出移动平均值。这么做是为了强调某些协程不会产出值,而是在最后返回一个值(通常是累计值)
下面 average 返回一个 numedtuple,两个字段分别是项数(count)和平均值(average)
Step12: 下面展示捕获 StopIteration 异常:
Step13: 获取协程返回值虽然要绕个圈子,但是这是 PEP 380 定义的方式,如果我们意识到这一点就说得通了,yield from 结构会在内部自动捕获 StopIteration 异常。这种处理方式与 for 循环处理 StopIteration 异常的方式一样,循环机制使用户用易于理解的方式处理异常,对于 yield from 来说,解释器不仅会捕获 StopIteration 异常,而且还会把 value 属性的值变成 yield from 表达式的值。可惜,我们无法在控制台中使用交互的方式测试这种行为,因为在函数外部使用 yield from(以及 yield)会导致语法出错
使用 yield from
首先要知道 yield from 是全新的语言结构,它的作用要比 yield 多很多,因此人们认为继续使用那个关键字多少会引起误解。在其他语言中,类似的结构用 await 关键字,这个名称好多了,因为它传达了至关重要的一点:在生成器 gen 中使用 yield from subgen() 时,subgen 会获得控制权,把产出值传给 gen 的调用方,即调用方可以直接控制 subgen,于此同时,gen 会阻塞,等待 subgen 终止
14 章说过,yield from 可用于简化 for 循环中的 yield 表达式,例如:
Step14: 可以改写成:
Step15: 在 14 章首次提到 yield from 时举了一个例子,演示这个结构用法:
Step16: yield from x 表达式对 x 做的第一件事是就是,调用 iter(x), 从中获取迭代器。因此 x 可以是任何可迭代对象。
可是,如果 yield from 结构唯一的作用是替代产出值的嵌套 for 循环,这个结构很有可能不会添加到 Python 语言中。yield from 结构本质作用无法通过简单的可迭代对象说明,而且要发散思维,用嵌套的生成器。
yield from 主要功能是打开双向通道,把最外层的调用方与最内层的子生成器连接起来,这样二者可以直接发送可产出值,还可以直接传入异常,而不用在位于中间的协程中添加大量处理异常的样板代码,有了这个结构,协程可以通过以前不可能的方式委托职责
要想使用 yield from 结构,需要大量修改代码,为了说明需要改动的部分,下面是一些专门术语:
委派生成器:包含 yield from <iterable> 表达式的生成器函数
子生成器:从 yield from 表达式中 <iterable> 部分获取的生成器
调用方: 调用委派生成器的客户端代码,在不同语境中,我们会使用客户端代替调用方,以此与委派生成器(也是调用方,因为它调用了子生成器)区分开
委派生成器在 yield from 表达式处暂停,调用方可以直接把数据发给子生成器,自生成器再把产出的值发给调用方。自生成器返回之后,解释器会抛出 StopIteration 异常,并把返回值附加到异常对象上,此时委派生成器会恢复 | Python Code:
def simple_coroutine():
print('-> coroutine started')
# 如果协程只需要从客户那里接收数据,那么产出的值是 None
# 这个值是隐式指定的,因为 yield 关键字右面没有表达式
x = yield
print('-> croutine received:', x)
my_coro = simple_coroutine()
my_coro
# 先调用 next(...) 函数,因为生成器还没启动,没在 yield 语句暂停,所以无法发送数据
next(my_coro)
# 协程定义体中的 yield 表达式会计算出 42,现在协程会恢复,
# 一直运行到下一个 yield 表达式或者终止
my_coro.send(42) # 控制权流动到协程定义体末尾,生成器抛出 StopIteration 异常
Explanation: 字典中 to yield 表示产出和让步,对于 Python 生成器中的 yield 来说,这是成立的,yield item 这行代码会产生一个值,提供给 next(...) 调用方,此外,还会做出让步,暂停执行生成器,让调用方继续工作,直到需要使用另一个值再调用 next()。调用方会从生成器中拉取值
语法上来说,协程和生成器类似,都是定义体中包含 yield 关键字的函数,可是,在协程中,yield 通常出现在表达式的右边,可以产出值,也可以不产出 -- 如果 yield 关键字后面没有表达式,那么生成器产出 None,协程可能会从调用方接收数据,不过调用方把数据提供给协程使用的是 .send(datum) 方法,而不是 next(...) 函数。通常调用方会把值推送给协程
yield 关键字甚至还可以不接收或传出数据,不过数据如何流动,yield 都是一种流程控制工具,使用它可以实现协作式多任务:协程可以把控制器让步给中心调度程序,从而激活其他的协程
从根本上把 yield 视作控制流程的方式,这样就好理解协程了
本书前面介绍生成器函数作用不大,但是进行一系列功能改进后,得到了 Python 协程。了解 Python 协程有助于理解各个阶段的改进的功能和复杂度
本章覆盖以下话题:
生成器作为协程使用时的行为和状态
使用装饰器自动预激协程
调用方如何使用生成器对象的 .close() 和 .throw(...) 方法控制协程
协程终止时如何返回值
yield from 新语法的用途和语义
使用案例 -- 使用协程管理仿真系统中的并发活动
生成器如何进化成协程
在 Python 2.5 中实现了 yield 关键字可以在表达式中使用,并在生成器 API 中增加了 .send(value) 方法。生成器的调用可以使用 .send(...) 方法发送数据,发送的数据会称为生成器函数中 yield 表达式的值,因此生成器可以作为协程使用,协程指的是一个过程,这个过程与调用方协作,产出由调用方提供的值
除了 .send(...) 方法,还加了 throw(...) 和 .close() 方法,在本章后面讲解
在 Python 3.3 中,对生成器函数语法做了两处改动,以便更好的作为协程使用
现在,生成器可以返回一个值,以前如果在生成器中给 return 语句提供值,会抛出 SyntaxError 异常
新引入了 yield from 语法,可以把复杂的生成器重构成小型嵌套的生成器,省去了之前把生成器的工作委托给予生成器所需的大量样板代码
作为协程的生成器基本行为
End of explanation
my_coro = simple_coroutine()
my_coro.send(9577)
Explanation: 协程可以处于 4 个状态中的一个。当前状态可以使用 inspect.getgeneratorstate(...) 函数确定,该函数会返回下面字符串中的一个
'GEN_CREATED' 等待开始执行
'GEN_RUNNING' 解释器正在执行(只有在多线程才能看到这个状态。此外,生成器对象在自己身上调用 getgeneratorstate 函数也行,就是没啥用)
'GEN_SUSPENDED' 在 yield 表达式处暂停
'GEN_CLOSED' 执行结束
因为 send 方法参数会称为暂停的 yield 表达式的值,所以,仅当协程处于暂停状态时才能调用 send 方法,例如 my_coro.send(42)。不过,如果协程还没激活(即,状态是 'GEN_CREATED'),情况就不同了。因此,始终要调用 next(my_coro) 激活协程 -- 也可以调用 my_coro.send(None),效果一样。
如果创建协程对象后立即把 None 之外的值发给它,会出现下面错误:
End of explanation
def simple_coro2(a):
print('-> Started a=', a)
b = yield a
print('-> Received: b=', b)
c = yield a + b
print('-> Received: c=', c)
my_coro2 = simple_coro2(14)
from inspect import getgeneratorstate
getgeneratorstate(my_coro2) # 协程未启动
next(my_coro2) # 产出 a 的值,暂停,等待为 b 赋值
getgeneratorstate(my_coro2) # 协程在 yield 表达式暂停
# 把数字 28 给协程,计算 yield 表达式,得到 28,然后把 28 绑定给 b
# 打印 b = 28 消息,产出 a + b 的值 (42),然后协程暂停,等待为 c 赋值
my_coro2.send(28)
my_coro2.send(99)
Explanation: 注意错误描述,描述的很清楚
最先调用 next(my_coro) 函数这一步通常称为 “预激”(prime) 协程(即,让协程向前执行到第一个 yield 表达式,准备好作为活跃的协程使用)。
下面是一个让我们更好理解协程行为的例子:
End of explanation
from IPython.display import Image
Image(filename='yield.png')
Explanation: 注意这个是产出值的时间
End of explanation
def averager():
total =0.0
count = 0
average = None
while True:
term = yield average
total += term
count += 1
average = total / count
Explanation: 使用协程计算移动平均值
下面我们展示如何用协程计算移动平均值:
End of explanation
coro_avg = averager()
next(coro_avg)
coro_avg.send(10)
coro_avg.send(30)
coro_avg.send(5)
Explanation: 这个无限循环表明,只要调用方不断把值发给协程,它就会一直接收值,然后生成结果。晋档调用方在协程上调用 .close() 方法,或者没有协程的引用而被垃圾回收程序回收时,这个协程才会终止。
这里的 yield 用于暂停协程,把结果发给调用方;还用于接收调用方后面发给协程的值,恢复无线循环
使用协程的好处是 total 和 count 生成为局部变量就好,不用使用闭包
End of explanation
from functools import wraps
def coroutine(func):
@wraps(func)
def primer(*args, **kwargs):
# 把装饰器生成器函数替换成这里的 primer 函数
# 调用 primer 函数时,返回预激后的生成器
gen = func(*args, **kwargs) # 调用被装饰的函数,获取生成器对象
next(gen) # 预激生成器
return gen # 返回生成器
return primer
@coroutine
def averager():
total = 0.0
count = 0
average = None
while True:
term = yield average
total += term
count += 1
average = total / count
coro_avg = averager() # 使用 coroutine 装饰的协程,可以立即开始发送值
coro_avg.send(40)
coro_avg.send(50)
coro_avg.send('spam') # 发送的不是数字,导致协程内部有异常抛出
coro_avg.send(60) # 由于协程内部没有异常处理,协程会终止,试图重新激活协程,会抛出 StopIteration 异常
Explanation: 我们一定想知道如何停止这个协程,但是在此之前我们先讨论如何启动协程在使用协程之前必须要预激,这一步容易忘记,为了避免忘记,可以在协程上使用一个特殊的装饰器
预激协程的装饰器
如果不预激,那么协程没什么用。调用 my_cor.send(x) 之前,记住一定要调用 next(my_coro)。为了简化协程的用法,有时会使用一个预激装饰器,下面就是一个例子:
End of explanation
class DemoException(Exception):
'''为这次演示定义的异常类型'''
def demo_exc_handling():
print('-> coroutine started')
while True:
try:
x = yield
except DemoException: # 特别处理 DemoException 异常
print('*** DemoException handled. Continuing...')
else:
print('-> coroutine received: {!r}'.format(x))
# 这一行永远不会执行,因为有未处理异常才会终止循环,而一旦出现未处理异常,协程就会终止
raise RuntimeError('This line should never run')
exc_coro = demo_exc_handling()
next(exc_coro)
exc_coro.send(11)
exc_coro.send(22)
exc_coro.close()
from inspect import getgeneratorstate
getgeneratorstate(exc_coro)
Explanation: 使用 yield from 语法调用协程也会自动预激(一会讲),因此会和上面的 @coroutine 装饰器不兼容。Python 3.4 标准库中的 asyncio.coroutine 装饰器(第 18 章)不会预激协程,因此能兼容 yield from 方法
终止协程和异常处理
上面的出错原因是 'spam' 不能加到 total 变量上。
上面例子暗示了一种终止协程的方式:发送某个哨符值,让协程退出,内置的 None 和 Ellipsis 等常量通常作为哨符值。 Ellipsis 的优点是,数据流中不太常有这个值。还有人用 StopIteration 类(类本身,而不是实例,也不抛出)作为哨符值,也就是说是这样使用的:my_coro.send(StopIteration)
从 Python 2.5 开始,客户可以在生成器对象上调用两个方法,显式的把异常发给协程,这两个方法是 throw 和 close
generator.throw(exc_type[, exc_value[, traceback]])
致使生成器在暂停的 yield 表达式抛出指定的异常。如果生成器处理了抛出的异常,代码会向前执行到下一个 yield 表达式,而产出的值会成为调用 generator.throw 方法得到的返回值,如果生成器没有处理抛出的异常,异常会向上冒泡,传到调用方的上下文中
generator.close()
致使生成器暂停的 yield 表达式抛出的 GeneratorExit 异常。如果生成器没有处理这个异常,或者抛出了 StopIteration 异常(通常是指运行到结尾),调用方不会报错,如果收到了 GeneratorExit 异常,生成器一定不能产出值,否则解释器会抛出 RuntimeError 异常。生成器抛出的其他异常会向上冒泡,传给调用方
下面展示如何使用 close 和 throw 方法控制协程
End of explanation
exc_coro = demo_exc_handling()
next(exc_coro)
exc_coro.send(11)
exc_coro.throw(DemoException)
getgeneratorstate(exc_coro)
Explanation: 如果把 DemoException 异常传入 demo_exc_handling 协程,它会处理,然后继续运行,如下:
End of explanation
exc_coro = demo_exc_handling()
next(exc_coro)
exc_coro.send(11)
exc_coro.throw(ZeroDivisionError)
getgeneratorstate(exc_coro)
Explanation: 但是,传入协程的异常没有处理,协程会停止,即状态变成 GEN_CLOSE:
End of explanation
class DemoException(Exception):
'''为这次演示定义的异常类型'''
def demo_finally():
print('-> coroutine started')
try:
while True:
try:
x = yield
except DemoException: # 特别处理 DemoException 异常
print('*** DemoException handled. Continuing...')
else:
print('-> coroutine received: {!r}'.format(x))
finally:
print('-> coroutine ending')
Explanation: 如果不管协程如何结束都想做那些清理工作,要把协程定义体中相关代码放入 try/finally 块中,如下:
End of explanation
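A quick demonstration (added here for illustration; it is not in the original cell) that the finally clause runs when the coroutine is closed:
fin_coro = demo_finally()
next(fin_coro)
fin_coro.send(11)
fin_coro.close()  # GeneratorExit is raised at the paused yield, so the finally block prints its message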
from collections import namedtuple
Result = namedtuple('Result', 'count average')
def averager():
total = 0.0
count = 0
average = None
while True:
term = yield
if term is None: # 为了返回值,协程必须正常停止
break
total += term
count += 1
average = total / count
return Result(count, average) # 返回 namedtuple,在 Python 3.3 之前如果生成器返回值会报错
coro_avg = averager()
next(coro_avg)
coro_avg.send(10) # 这一版不返回值
coro_avg.send(30)
coro_avg.send(6.5)
# 发 None 会终止循环,导致协程结束,一如既往,生成器会抛出 StopIteration 异常,异常对象的 value 属性保存着返回的值
coro_avg.send(None)
Explanation: Python 3.3 引入 yield from 结构的主要原因之一与把异常传入嵌套的协程有关。另一个原因是让协程更方便的返回值
让协程返回值
下面的这个 averager 例子会返回结果,为了说明如何返回值,每次激活协程时不会产出移动平均值。这么做是为了强调某些协程不会产出值,而是在最后返回一个值(通常是累计值)
下面 average 返回一个 numedtuple,两个字段分别是项数(count)和平均值(average)
End of explanation
coro_avg = averager()
next(coro_avg)
coro_avg.send(10)
coro_avg.send(30)
coro_avg.send(6.5)
try:
coro_avg.send(None)
except StopIteration as exc:
result = exc.value
result
Explanation: 下面展示捕获 StopIteration 异常:
End of explanation
def gen():
for c in 'AB':
yield c
for i in range(1, 3):
yield i
list(gen())
Explanation: 获取协程返回值虽然要绕个圈子,但是这是 PEP 380 定义的方式,如果我们意识到这一点就说得通了,yield from 结构会在内部自动捕获 StopIteration 异常。这种处理方式与 for 循环处理 StopIteration 异常的方式一样,循环机制使用户用易于理解的方式处理异常,对于 yield from 来说,解释器不仅会捕获 StopIteration 异常,而且还会把 value 属性的值变成 yield from 表达式的值。可惜,我们无法在控制台中使用交互的方式测试这种行为,因为在函数外部使用 yield from(以及 yield)会导致语法出错
使用 yield from
首先要知道 yield from 是全新的语言结构,它的作用要比 yield 多很多,因此人们认为继续使用那个关键字多少会引起误解。在其他语言中,类似的结构用 await 关键字,这个名称好多了,因为它传达了至关重要的一点:在生成器 gen 中使用 yield from subgen() 时,subgen 会获得控制权,把产出值传给 gen 的调用方,即调用方可以直接控制 subgen,于此同时,gen 会阻塞,等待 subgen 终止
14 章说过,yield from 可用于简化 for 循环中的 yield 表达式,例如:
End of explanation
def gen():
yield from 'AB'
yield from range(1, 3)
list(gen())
Explanation: 可以改写成:
End of explanation
def chain(*iterables):
for it in iterables:
yield from it
s = 'ABC'
t = tuple(range(3))
list(chain(s, t))
Explanation: 在 14 章首次提到 yield from 时举了一个例子,演示这个结构用法:
End of explanation
from collections import namedtuple
from time import sleep
Result = namedtuple('Result', 'count average')
# 子生成器
def averager():
total = 0.0
count = 0
average = None
while True:
# print('11111')
term = yield # main 函数中的客户代码发送的各个值绑定到这里的 term 变量
if term is None: # 很重要的停止条件,不这么做,yield from 调用此协程会永远阻塞
break
total += term
count += 1
average = total / count
return Result(count, average) # 返回的 Result 会成为 grouper 函数中的 yield from 表达式的值
# 委派生成器
def grouper(results, key):
while True: # 每次迭代会新建一个 average 实例,每个实例都是作为协程使用的生成器对象
# grouper 发送的每个值都会经由 yield from 处理,通过管道传给 averager 实例
# grouper 会在 yield from 表达式暂停,等待 averager 实例处理完客户端发来的值
# averager 运行结束后,返回的值绑定到 results[key] 上,while 循环会不断的创建 averager 实例,处理更多的值
results[key] = yield from averager()
# 客户端代码
def main(data):
results = {}
for key, values in data.items():
group = grouper(results, key) # 调用 grouper 得到生成器对象,传给 grouper 的第一个参数用于收集结果,group 作为协程使用
next(group) # 预激
# sleep(60)
for value in values:
#把各个 value 传给 grouper。传入的值最终到达 averager 函数中的 term = yield 那一行:
# grouper 永远不知道传入的值是什么
group.send(value)
# 把 None 传入grouper,导致当前的averager 实例终止,也让 grouper 继续运行,然后再创建一组 averager 实例,处理下一组值
group.send(None) # 重要,没有这行 averager 不会终止,results[key] 不会被赋值就到了下一个 group
#print(results)
report(results)
# 输出报告
def report(results):
for key, result in sorted(results.items()):
group, unit = key.split(';')
print('{:2} {:5} averaging {:.2f}{}'.format(
result.count, group, result.average, unit))
data = {
'girls;kg':
[40.9, 38.5, 44.3, 42.2, 45.2, 41.7, 44.5, 38.0, 40.6, 44.5],
'girls;m':
[1.6, 1.51, 1.4, 1.3, 1.41, 1.39, 1.33, 1.46, 1.45, 1.43],
'boys;kg':
[39.0, 40.8, 43.2, 40.8, 43.1, 38.6, 41.4, 40.6, 36.3],
'boys;m':
[1.38, 1.5, 1.32, 1.25, 1.37, 1.48, 1.25, 1.49, 1.46],
}
main(data)
Explanation: yield from x 表达式对 x 做的第一件事是就是,调用 iter(x), 从中获取迭代器。因此 x 可以是任何可迭代对象。
可是,如果 yield from 结构唯一的作用是替代产出值的嵌套 for 循环,这个结构很有可能不会添加到 Python 语言中。yield from 结构本质作用无法通过简单的可迭代对象说明,而且要发散思维,用嵌套的生成器。
yield from 主要功能是打开双向通道,把最外层的调用方与最内层的子生成器连接起来,这样二者可以直接发送可产出值,还可以直接传入异常,而不用在位于中间的协程中添加大量处理异常的样板代码,有了这个结构,协程可以通过以前不可能的方式委托职责
要想使用 yield from 结构,需要大量修改代码,为了说明需要改动的部分,下面是一些专门术语:
委派生成器:包含 yield from <iterable> 表达式的生成器函数
子生成器:从 yield from 表达式中 <iterable> 部分获取的生成器
调用方: 调用委派生成器的客户端代码,在不同语境中,我们会使用客户端代替调用方,以此与委派生成器(也是调用方,因为它调用了子生成器)区分开
委派生成器在 yield from 表达式处暂停,调用方可以直接把数据发给子生成器,自生成器再把产出的值发给调用方。自生成器返回之后,解释器会抛出 StopIteration 异常,并把返回值附加到异常对象上,此时委派生成器会恢复
End of explanation |
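A minimal, self-contained sketch of the mechanism just described (illustrative, in addition to the worked example above): the value attached to the sub-generator's StopIteration becomes the value of the yield from expression inside the delegating generator.
def subgen():
    yield 1
    yield 2
    return 'done'                          # travels back via StopIteration.value

def delegator(results):
    results['sub'] = yield from subgen()   # receives 'done' when subgen returns

res = {}
for item in delegator(res):                # the for loop plays the role of the caller
    print(item)                            # prints 1, then 2
print(res)                                 # {'sub': 'done'}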
8,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Adversarial example using FGSM
Step2: Let's load the pretrained MobileNetV2 model and the ImageNet class names.
Step3: Original image
Let's use a sample image of a Labrador Retriever by Mirko CC-BY-SA 3.0 from Wikimedia Common and create adversarial examples from it. The first step is to preprocess it so that it can be fed as an input to the MobileNetV2 model.
Step4: Let's have a look at the image.
Step5: Create the adversarial image
Implementing fast gradient sign method
The first step is to create perturbations which will be used to distort the original image resulting in an adversarial image. As mentioned, for this task, the gradients are taken with respect to the image.
Step6: The resulting perturbations can also be visualised.
Step7: Let's try this out for different values of epsilon and observe the resultant image. You'll notice that as the value of epsilon is increased, it becomes easier to fool the network. However, this comes as a trade-off which results in the perturbations becoming more identifiable. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (8, 8)
mpl.rcParams['axes.grid'] = False
Explanation: Adversarial example using FGSM
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/generative/adversarial_fgsm"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/adversarial_fgsm.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/adversarial_fgsm.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/adversarial_fgsm.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial creates an adversarial example using the Fast Gradient Signed Method (FGSM) attack as described in Explaining and Harnessing Adversarial Examples by Goodfellow et al. This was one of the first and most popular attacks to fool a neural network.
What is an adversarial example?
Adversarial examples are specialised inputs created with the purpose of confusing a neural network, resulting in the misclassification of a given input. These notorious inputs are indistinguishable to the human eye, but cause the network to fail to identify the contents of the image. There are several types of such attacks, however, here the focus is on the fast gradient sign method attack, which is a white box attack whose goal is to ensure misclassification. A white box attack is where the attacker has complete access to the model being attacked. One of the most famous examples of an adversarial image shown below is taken from the aforementioned paper.
Here, starting with the image of a panda, the attacker adds small perturbations (distortions) to the original image, which results in the model labelling this image as a gibbon, with high confidence. The process of adding these perturbations is explained below.
Fast gradient sign method
The fast gradient sign method works by using the gradients of the neural network to create an adversarial example. For an input image, the method uses the gradients of the loss with respect to the input image to create a new image that maximises the loss. This new image is called the adversarial image. This can be summarised using the following expression:
$$adv_x = x + \epsilon*\text{sign}(\nabla_xJ(\theta, x, y))$$
where
adv_x : Adversarial image.
x : Original input image.
y : Original input label.
$\epsilon$ : Multiplier to ensure the perturbations are small.
$\theta$ : Model parameters.
$J$ : Loss.
An intriguing property here is that the gradients are taken with respect to the input image. This is done because the objective is to create an image that maximises the loss. A method to accomplish this is to find how much each pixel in the image contributes to the loss value, and add a perturbation accordingly. This works pretty fast because it is easy to find how each input pixel contributes to the loss by using the chain rule and finding the required gradients. Hence, the gradients are taken with respect to the image. In addition, since the model is no longer being trained, the gradient is not taken with respect to the trainable variables (the model parameters), so the model parameters remain constant. The only goal is to fool an already trained model.
So let's try and fool a pretrained model. In this tutorial, the model is MobileNetV2 model, pretrained on ImageNet.
End of explanation
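Before working with real images, here is a tiny, self-contained illustration (not part of the tutorial) of the formula above: take the sign of the gradient of a stand-in loss and nudge a toy input by epsilon in that direction.
# Toy FGSM step on a 3-element "image" with a hand-written stand-in loss.
x_toy = tf.constant([0.2, -0.7, 0.5])
with tf.GradientTape() as tape:
    tape.watch(x_toy)
    loss_toy = tf.reduce_sum(x_toy ** 2)   # stand-in for J(theta, x, y)
grad_toy = tape.gradient(loss_toy, x_toy)
adv_toy = x_toy + 0.1 * tf.sign(grad_toy)  # x + eps * sign(grad)
print(adv_toy.numpy())                     # [ 0.3 -0.8  0.6]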
pretrained_model = tf.keras.applications.MobileNetV2(include_top=True,
weights='imagenet')
pretrained_model.trainable = False
# ImageNet labels
decode_predictions = tf.keras.applications.mobilenet_v2.decode_predictions
# Helper function to preprocess the image so that it can be inputted in MobileNetV2
def preprocess(image):
image = tf.cast(image, tf.float32)
image = tf.image.resize(image, (224, 224))
image = tf.keras.applications.mobilenet_v2.preprocess_input(image)
image = image[None, ...]
return image
# Helper function to extract labels from probability vector
def get_imagenet_label(probs):
return decode_predictions(probs, top=1)[0][0]
Explanation: Let's load the pretrained MobileNetV2 model and the ImageNet class names.
End of explanation
image_path = tf.keras.utils.get_file('YellowLabradorLooking_new.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')
image_raw = tf.io.read_file(image_path)
image = tf.image.decode_image(image_raw)
image = preprocess(image)
image_probs = pretrained_model.predict(image)
Explanation: Original image
Let's use a sample image of a Labrador Retriever by Mirko CC-BY-SA 3.0 from Wikimedia Common and create adversarial examples from it. The first step is to preprocess it so that it can be fed as an input to the MobileNetV2 model.
End of explanation
plt.figure()
plt.imshow(image[0] * 0.5 + 0.5) # To change [-1, 1] to [0,1]
_, image_class, class_confidence = get_imagenet_label(image_probs)
plt.title('{} : {:.2f}% Confidence'.format(image_class, class_confidence*100))
plt.show()
Explanation: Let's have a look at the image.
End of explanation
loss_object = tf.keras.losses.CategoricalCrossentropy()
def create_adversarial_pattern(input_image, input_label):
with tf.GradientTape() as tape:
tape.watch(input_image)
prediction = pretrained_model(input_image)
loss = loss_object(input_label, prediction)
# Get the gradients of the loss w.r.t to the input image.
gradient = tape.gradient(loss, input_image)
# Get the sign of the gradients to create the perturbation
signed_grad = tf.sign(gradient)
return signed_grad
Explanation: Create the adversarial image
Implementing fast gradient sign method
The first step is to create perturbations which will be used to distort the original image resulting in an adversarial image. As mentioned, for this task, the gradients are taken with respect to the image.
End of explanation
# Get the input label of the image.
labrador_retriever_index = 208
label = tf.one_hot(labrador_retriever_index, image_probs.shape[-1])
label = tf.reshape(label, (1, image_probs.shape[-1]))
perturbations = create_adversarial_pattern(image, label)
plt.imshow(perturbations[0] * 0.5 + 0.5); # To change [-1, 1] to [0,1]
Explanation: The resulting perturbations can also be visualised.
End of explanation
def display_images(image, description):
_, label, confidence = get_imagenet_label(pretrained_model.predict(image))
plt.figure()
plt.imshow(image[0]*0.5+0.5)
plt.title('{} \n {} : {:.2f}% Confidence'.format(description,
label, confidence*100))
plt.show()
epsilons = [0, 0.01, 0.1, 0.15]
descriptions = [('Epsilon = {:0.3f}'.format(eps) if eps else 'Input')
for eps in epsilons]
for i, eps in enumerate(epsilons):
adv_x = image + eps*perturbations
adv_x = tf.clip_by_value(adv_x, -1, 1)
display_images(adv_x, descriptions[i])
Explanation: Let's try this out for different values of epsilon and observe the resultant image. You'll notice that as the value of epsilon is increased, it becomes easier to fool the network. However, this comes as a trade-off which results in the perturbations becoming more identifiable.
End of explanation |
8,356 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
The function below named create_sequences(), given the tokenizer, a maximum sequence length, and the dictionary of all descriptions and photos, will transform the data into input-output pairs of data for training the model.
| Python Code::
# create sequences of images, input sequences and output words for an image
# (imports added for completeness -- the original snippet assumed they were already
# in scope; the paths below assume tf.keras)
from numpy import array
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

def create_sequences(tokenizer, max_length, descriptions, photos, vocab_size):
X1, X2, y = list(), list(), list()
# walk through each image identifier
for key, desc_list in descriptions.items():
# walk through each description for the image
for desc in desc_list:
# encode the sequence
seq = tokenizer.texts_to_sequences([desc])[0]
# split one sequence into multiple X,y pairs
for i in range(1, len(seq)):
# split into input and output pair
in_seq, out_seq = seq[:i], seq[i]
# pad input sequence
in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
# encode output sequence
out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
# store
X1.append(photos[key][0])
X2.append(in_seq)
y.append(out_seq)
return array(X1), array(X2), array(y)
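A minimal usage sketch with hypothetical toy data (illustrative only; the feature vector is a stand-in for real extracted CNN features, and the import path assumes tf.keras):
from tensorflow.keras.preprocessing.text import Tokenizer
toy_descriptions = {'img1': ['startseq a dog runs endseq']}
toy_photos = {'img1': [[0.1, 0.2, 0.3]]}   # stand-in for an extracted photo feature vector
tokenizer = Tokenizer()
tokenizer.fit_on_texts([d for ds in toy_descriptions.values() for d in ds])
vocab_size = len(tokenizer.word_index) + 1
X1, X2, y = create_sequences(tokenizer, 6, toy_descriptions, toy_photos, vocab_size)
print(X1.shape, X2.shape, y.shape)         # (4, 3) (4, 6) (4, 6)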
|
8,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Data Generation
Data is generated from a 2D mixture of Gaussians.
Step2: Plotting
Step3: Models and Training
A multilayer perceptron with the ReLU activation function.
Step4: The loss function for the discriminator is
Step5: The loss function for the generator is
Step6: Perform a training step by first updating the discriminator parameters $\phi$ using the gradient $\nabla_\phi L_D (\phi, \theta)$ and then updating the generator parameters $\theta$ using the gradient $\nabla_\theta L_G (\phi, \theta)$.
Step7: Plot Results
Plot the data and the examples generated by the generator. | Python Code:
!pip install -q flax
from typing import Sequence
import matplotlib.pyplot as plt
import jax
import jax.numpy as jnp
import flax.linen as nn
from flax.training import train_state
import optax
import functools
import scipy as sp
import scipy.stats  # explicit import so that sp.stats.gaussian_kde below resolves
import math
rng = jax.random.PRNGKey(0)
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/gan_mixture_of_gaussians.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This notebook implements a Generative Adversarial Network to fit a synthetic dataset generated from a mixture of Gaussians in 2D.
The code was adapted from the ODEGAN code here: https://github.com/deepmind/deepmind-research/blob/master/ode_gan/odegan_mog16.ipynb. The original notebook was created by Chongli Qin.
Some modifications made by Mihaela Rosca here were also incorporated.
Imports
End of explanation
@functools.partial(jax.jit, static_argnums=(1,))
def real_data(rng, batch_size):
mog_mean = jnp.array(
[
[1.50, 1.50],
[1.50, 0.50],
[1.50, -0.50],
[1.50, -1.50],
[0.50, 1.50],
[0.50, 0.50],
[0.50, -0.50],
[0.50, -1.50],
[-1.50, 1.50],
[-1.50, 0.50],
[-1.50, -0.50],
[-1.50, -1.50],
[-0.50, 1.50],
[-0.50, 0.50],
[-0.50, -0.50],
[-0.50, -1.50],
]
)
temp = jnp.tile(mog_mean, (batch_size // 16 + 1, 1))
mus = temp[0:batch_size, :]
return mus + 0.02 * jax.random.normal(rng, shape=(batch_size, 2))
Explanation: Data Generation
Data is generated from a 2D mixture of Gaussians.
End of explanation
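As a quick visual check (illustrative, not in the original notebook), scattering one batch of real samples shows the 4 x 4 grid of modes:
rng_check = jax.random.PRNGKey(1)
real_check = real_data(rng_check, 512)
plt.scatter(real_check[:, 0], real_check[:, 1], s=3)
plt.gca().set_aspect('equal')
plt.show()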
def plot_on_ax(ax, values, contours=None, bbox=None, xlabel="", ylabel="", title="", cmap="Blues"):
kernel = sp.stats.gaussian_kde(values.T)
ax.axis(bbox)
ax.set_aspect(abs(bbox[1] - bbox[0]) / abs(bbox[3] - bbox[2]))
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xticks([])
ax.set_yticks([])
xx, yy = jnp.mgrid[bbox[0] : bbox[1] : 300j, bbox[2] : bbox[3] : 300j]
positions = jnp.vstack([xx.ravel(), yy.ravel()])
f = jnp.reshape(kernel(positions).T, xx.shape)
cfset = ax.contourf(xx, yy, f, cmap=cmap)
if contours is not None:
x = jnp.arange(-2.0, 2.0, 0.1)
y = jnp.arange(-2.0, 2.0, 0.1)
cx, cy = jnp.meshgrid(x, y)
new_set = ax.contour(
cx, cy, contours.squeeze().reshape(cx.shape), levels=20, colors="k", linewidths=0.8, alpha=0.5
)
ax.set_title(title)
Explanation: Plotting
End of explanation
class MLP(nn.Module):
features: Sequence[int]
@nn.compact
def __call__(self, x):
for feat in self.features[:-1]:
x = jax.nn.relu(nn.Dense(features=feat)(x))
x = nn.Dense(features=self.features[-1])(x)
return x
Explanation: Models and Training
A multilayer perceptron with the ReLU activation function.
End of explanation
@jax.jit
def discriminator_step(disc_state, gen_state, latents, real_examples):
def loss_fn(disc_params):
fake_examples = gen_state.apply_fn(gen_state.params, latents)
real_logits = disc_state.apply_fn(disc_params, real_examples)
fake_logits = disc_state.apply_fn(disc_params, fake_examples)
disc_real = -jax.nn.log_sigmoid(real_logits)
# log(1 - sigmoid(x)) = log_sigmoid(-x)
disc_fake = -jax.nn.log_sigmoid(-fake_logits)
return jnp.mean(disc_real + disc_fake)
disc_loss, disc_grad = jax.value_and_grad(loss_fn)(disc_state.params)
disc_state = disc_state.apply_gradients(grads=disc_grad)
return disc_state, disc_loss
Explanation: The loss function for the discriminator is:
$$L_D(\phi, \theta) = \mathbb{E}_{p^*(x)} g(D_\phi(x)) + \mathbb{E}_{q(z)} h(D_\phi(G_\theta(z)))$$
where $g(t) = -\log t$, $h(t) = -\log(1 - t)$ as in the original GAN.
End of explanation
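A small numerical sanity check (illustrative) of the identity used in the discriminator code above, log(1 - sigmoid(x)) = log_sigmoid(-x):
x_check = jnp.array([-3.0, 0.0, 2.5])
print(jnp.log(1.0 - jax.nn.sigmoid(x_check)))
print(jax.nn.log_sigmoid(-x_check))   # matches the line above up to floating-point error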
@jax.jit
def generator_step(disc_state, gen_state, latents):
def loss_fn(gen_params):
fake_examples = gen_state.apply_fn(gen_params, latents)
fake_logits = disc_state.apply_fn(disc_state.params, fake_examples)
disc_fake = -jax.nn.log_sigmoid(fake_logits)
return jnp.mean(disc_fake)
gen_loss, gen_grad = jax.value_and_grad(loss_fn)(gen_state.params)
gen_state = gen_state.apply_gradients(grads=gen_grad)
return gen_state, gen_loss
Explanation: The loss function for the generator is:
$$L_G(\phi, \theta) = \mathbb{E}_{q(z)} l(D_\phi(G_\theta(z)))$$
where $l(t) = -\log t$ for the non-saturating generator loss.
End of explanation
@jax.jit
def train_step(disc_state, gen_state, latents, real_examples):
disc_state, disc_loss = discriminator_step(disc_state, gen_state, latents, real_examples)
gen_state, gen_loss = generator_step(disc_state, gen_state, latents)
return disc_state, gen_state, disc_loss, gen_loss
batch_size = 512
latent_size = 32
discriminator = MLP(features=[25, 25, 1])
generator = MLP(features=[25, 25, 2])
# Initialize parameters for the discriminator and the generator
latents = jax.random.normal(rng, shape=(batch_size, latent_size))
real_examples = real_data(rng, batch_size)
disc_params = discriminator.init(rng, real_examples)
gen_params = generator.init(rng, latents)
# Plot real examples
bbox = [-2, 2, -2, 2]
plot_on_ax(plt.gca(), real_examples, bbox=bbox, title="Data")
plt.tight_layout()
plt.savefig("gan_gmm_data.pdf")
plt.show()
# Create train states for the discriminator and the generator
lr = 0.05
disc_state = train_state.TrainState.create(
apply_fn=discriminator.apply, params=disc_params, tx=optax.sgd(learning_rate=lr)
)
gen_state = train_state.TrainState.create(apply_fn=generator.apply, params=gen_params, tx=optax.sgd(learning_rate=lr))
# x and y grid for plotting discriminator contours
x = jnp.arange(-2.0, 2.0, 0.1)
y = jnp.arange(-2.0, 2.0, 0.1)
X, Y = jnp.meshgrid(x, y)
pairs = jnp.stack((X, Y), axis=-1)
pairs = jnp.reshape(pairs, (-1, 2))
# Latents for testing generator
test_latents = jax.random.normal(rng, shape=(batch_size * 10, latent_size))
num_iters = 20001
n_save = 2000
draw_contours = False
history = []
for i in range(num_iters):
rng_iter = jax.random.fold_in(rng, i)
data_rng, latent_rng = jax.random.split(rng_iter)
# Sample minibatch of examples
real_examples = real_data(data_rng, batch_size)
# Sample minibatch of latents
latents = jax.random.normal(latent_rng, shape=(batch_size, latent_size))
    # Update the discriminator and then the generator
disc_state, gen_state, disc_loss, gen_loss = train_step(disc_state, gen_state, latents, real_examples)
if i % n_save == 0:
print(f"i = {i}, Discriminator Loss = {disc_loss}, " + f"Generator Loss = {gen_loss}")
# Generate examples using the test latents
fake_examples = gen_state.apply_fn(gen_state.params, test_latents)
if draw_contours:
real_logits = disc_state.apply_fn(disc_state.params, pairs)
disc_contour = -real_logits + jax.nn.log_sigmoid(real_logits)
else:
disc_contour = None
history.append((i, fake_examples, disc_contour, disc_loss, gen_loss))
Explanation: Perform a training step by first updating the discriminator parameters $\phi$ using the gradient $\nabla_\phi L_D (\phi, \theta)$ and then updating the generator parameters $\theta$ using the gradient $\nabla_\theta L_G (\phi, \theta)$.
End of explanation
# Plot generated examples from history
for i, hist in enumerate(history):
iter, fake_examples, contours, disc_loss, gen_loss = hist
plot_on_ax(
plt.gca(),
fake_examples,
contours=contours,
bbox=bbox,
xlabel=f"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}",
title=f"Samples at Iteration {iter}",
)
plt.tight_layout()
plt.savefig(f"gan_gmm_iter_{iter}.pdf")
plt.show()
cols = 3
rows = math.ceil((len(history) + 1) / cols)
bbox = [-2, 2, -2, 2]
fig, axs = plt.subplots(rows, cols, figsize=(cols * 3, rows * 3), dpi=200)
axs = axs.flatten()
# Plot real examples
plot_on_ax(axs[0], real_examples, bbox=bbox, title="Data")
# Plot generated examples from history
for i, hist in enumerate(history):
iter, fake_examples, contours, disc_loss, gen_loss = hist
plot_on_ax(
axs[i + 1],
fake_examples,
contours=contours,
bbox=bbox,
xlabel=f"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}",
title=f"Samples at Iteration {iter}",
)
# Remove extra plots from the figure
for i in range(len(history) + 1, len(axs)):
axs[i].remove()
plt.tight_layout()
plt.show()
Explanation: Plot Results
Plot the data and the examples generated by the generator.
End of explanation |
8,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Практическое задание к уроку 1 (2 неделя).
Линейная регрессия
Step1: Мы будем работать с датасетом "bikes_rent.csv", в котором по дням записаны календарная информация и погодные условия, характеризующие автоматизированные пункты проката велосипедов, а также число прокатов в этот день. Последнее мы будем предсказывать; таким образом, мы будем решать задачу регрессии.
Знакомство с данными
Загрузите датасет с помощью функции pandas.read_csv в переменную df. Выведите первые 5 строчек, чтобы убедиться в корректном считывании данных
Step2: Для каждого дня проката известны следующие признаки (как они были указаны в источнике данных)
Step3: Блок 1. Ответьте на вопросы (каждый 0.5 балла)
Step4: В выборке есть признаки, коррелирующие с целевым, а значит, задачу можно решать линейными методами.
По графикам видно, что некоторые признаки похожи друг на друга. Поэтому давайте также посчитаем корреляции между вещественными признаками.
Step5: На диагоналях, как и полагается, стоят единицы. Однако в матрице имеются еще две пары сильно коррелирующих столбцов
Step6: Признаки имеют разный масштаб, значит для дальнейшей работы нам лучше нормировать матрицу объекты-признаки.
Проблема первая
Step7: Давайте обучим линейную регрессию на наших данных и посмотрим на веса признаков.
Step8: Мы видим, что веса при линейно-зависимых признаках по модулю значительно больше, чем при других признаках.
Чтобы понять, почему так произошло, вспомним аналитическую формулу, по которой вычисляются веса линейной модели в методе наименьших квадратов
Step9: Проблема вторая
Step10: Визуализируем динамику весов при увеличении параметра регуляризации
Step11: Ответы на следующие вопросы можно давать, глядя на графики или выводя коэффициенты на печать.
Блок 2. Ответьте на вопросы (каждый 0.25 балла)
Step12: Итак, мы выбрали некоторый параметр регуляризации. Давайте посмотрим, какие бы мы выбирали alpha, если бы делили выборку только один раз на обучающую и тестовую, то есть рассмотрим траектории MSE, соответствующие отдельным блокам выборки. | Python Code:
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Практическое задание к уроку 1 (2 неделя).
Линейная регрессия: переобучение и регуляризация
В этом задании мы на примерах увидим, как переобучаются линейные модели, разберем, почему так происходит, и выясним, как диагностировать и контролировать переобучение.
Во всех ячейках, где написан комментарий с инструкциями, нужно написать код, выполняющий эти инструкции. Остальные ячейки с кодом (без комментариев) нужно просто выполнить. Кроме того, в задании требуется отвечать на вопросы; ответы нужно вписывать после выделенного слова "Ответ:".
Напоминаем, что посмотреть справку любого метода или функции (узнать, какие у нее аргументы и что она делает) можно с помощью комбинации Shift+Tab. Нажатие Tab после имени объекта и точки позволяет посмотреть, какие методы и переменные есть у этого объекта.
End of explanation
df = pd.read_csv('bikes_rent.csv')
df.head(5)
Explanation: Мы будем работать с датасетом "bikes_rent.csv", в котором по дням записаны календарная информация и погодные условия, характеризующие автоматизированные пункты проката велосипедов, а также число прокатов в этот день. Последнее мы будем предсказывать; таким образом, мы будем решать задачу регрессии.
Знакомство с данными
Загрузите датасет с помощью функции pandas.read_csv в переменную df. Выведите первые 5 строчек, чтобы убедиться в корректном считывании данных:
End of explanation
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(15, 10))
for idx, feature in enumerate(df.columns[:-1]):
    df.plot(feature, "cnt", subplots=True, kind="scatter", ax=axes[idx // 4, idx % 4])  # integer division for the subplot index (Python 3)
Explanation: Для каждого дня проката известны следующие признаки (как они были указаны в источнике данных):
* season: 1 - весна, 2 - лето, 3 - осень, 4 - зима
* yr: 0 - 2011, 1 - 2012
* mnth: от 1 до 12
* holiday: 0 - нет праздника, 1 - есть праздник
* weekday: от 0 до 6
* workingday: 0 - нерабочий день, 1 - рабочий день
* weathersit: оценка благоприятности погоды от 1 (чистый, ясный день) до 4 (ливень, туман)
* temp: температура в Цельсиях
* atemp: температура по ощущениям в Цельсиях
* hum: влажность
* windspeed(mph): скорость ветра в милях в час
* windspeed(ms): скорость ветра в метрах в секунду
* cnt: количество арендованных велосипедов (это целевой признак, его мы будем предсказывать)
Итак, у нас есть вещественные, бинарные и номинальные (порядковые) признаки, и со всеми из них можно работать как с вещественными. С номинальныеми признаками тоже можно работать как с вещественными, потому что на них задан порядок. Давайте посмотрим на графиках, как целевой признак зависит от остальных
End of explanation
print('Correlation of DataFrame target column with others:')
df.iloc[:, :-1].corrwith(df.cnt)
Explanation: Блок 1. Ответьте на вопросы (каждый 0.5 балла):
1. Каков характер зависимости числа прокатов от месяца?
* ответ: криволинейная (синусоида с пиком летом)
1. Укажите один или два признака, от которых число прокатов скорее всего зависит линейно
* ответ: temp
Давайте более строго оценим уровень линейной зависимости между признаками и целевой переменной. Хорошей мерой линейной зависимости между двумя векторами является корреляция Пирсона. В pandas ее можно посчитать с помощью двух методов датафрейма: corr и corrwith. Метод df.corr вычисляет матрицу корреляций всех признаков из датафрейма. Методу df.corrwith нужно подать еще один датафрейм в качестве аргумента, и тогда он посчитает попарные корреляции между признаками из df и этого датафрейма.
End of explanation
print('Pairwise correlation of DataFrame columns:')
df[['temp','atemp','hum','windspeed(mph)','windspeed(ms)','cnt']].corr()
Explanation: В выборке есть признаки, коррелирующие с целевым, а значит, задачу можно решать линейными методами.
По графикам видно, что некоторые признаки похожи друг на друга. Поэтому давайте также посчитаем корреляции между вещественными признаками.
End of explanation
print('Mean values of features:')
df.mean()
Explanation: На диагоналях, как и полагается, стоят единицы. Однако в матрице имеются еще две пары сильно коррелирующих столбцов: temp и atemp (коррелируют по своей природе) и два windspeed (потому что это просто перевод одних единиц в другие). Далее мы увидим, что этот факт негативно сказывается на обучении линейной модели.
Напоследок посмотрим средние признаков (метод mean), чтобы оценить масштаб признаков и доли 1 у бинарных признаков.
End of explanation
from sklearn.preprocessing import scale
from sklearn.utils import shuffle
df_shuffled = shuffle(df, random_state=123)
X = scale(df_shuffled[df_shuffled.columns[:-1]])
y = df_shuffled["cnt"]
Explanation: Признаки имеют разный масштаб, значит для дальнейшей работы нам лучше нормировать матрицу объекты-признаки.
Проблема первая: коллинеарные признаки
Итак, в наших данных один признак дублирует другой, и есть еще два очень похожих. Конечно, мы могли бы сразу удалить дубликаты, но давайте посмотрим, как бы происходило обучение модели, если бы мы не заметили эту проблему.
Для начала проведем масштабирование, или стандартизацию признаков: из каждого признака вычтем его среднее и поделим на стандартное отклонение. Это можно сделать с помощью метода scale.
Кроме того, нужно перемешать выборку, это потребуется для кросс-валидации.
End of explanation
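A quick sanity check (added in English; not part of the original assignment): after scale(), every column should have roughly zero mean and unit standard deviation.
print(X.mean(axis=0).round(3))
print(X.std(axis=0).round(3))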
from sklearn.linear_model import LinearRegression
# Код 2.1 (1 балл)
# Создайте объект линейного регрессора, обучите его на всех данных и выведите веса модели
# (веса хранятся в переменной coef_ класса регрессора).
# Можно выводить пары (название признака, вес), воспользовавшись функцией zip, встроенной в язык python
# Названия признаков хранятся в переменной df.columns
lr_model = LinearRegression()
lr_model.fit(X, y)
print('LINEAR REGRESSION MODEL')
print('Weight coefficients:')
for item in zip(df_shuffled.columns[:-1], lr_model.coef_.round()):
    print(item)
print('\nIndependent term:')
print(lr_model.intercept_.round())
Explanation: Давайте обучим линейную регрессию на наших данных и посмотрим на веса признаков.
End of explanation
from sklearn.linear_model import Lasso, Ridge
ls_model = Lasso()
ls_model.fit(X, y)
print('LASSO MODEL')
print('Weight coefficients:')
for item in zip(df_shuffled.columns[:-1], ls_model.coef_.round()):
    print(item)
print('\nIndependent term:')
print(ls_model.intercept_.round())
rd_model = Ridge()
rd_model.fit(X, y)
print('RIDGE MODEL')
print('Weight coefficients:')
for item in zip(df_shuffled.columns[:-1], rd_model.coef_.round()):
    print(item)
print('\nIndependent term:')
print(rd_model.intercept_.round())
Explanation: Мы видим, что веса при линейно-зависимых признаках по модулю значительно больше, чем при других признаках.
Чтобы понять, почему так произошло, вспомним аналитическую формулу, по которой вычисляются веса линейной модели в методе наименьших квадратов:
$w = (X^TX)^{-1} X^T y$.
Если в X есть коллинеарные (линейно-зависимые) столбцы, матрица $X^TX$ становится вырожденной, и формула перестает быть корректной. Чем более зависимы признаки, тем меньше определитель этой матрицы и тем хуже аппроксимация $Xw \approx y$. Такая ситуацию называют проблемой мультиколлинеарности, вы обсуждали ее на лекции.
С парой temp-atemp чуть менее коррелирующих переменных такого не произошло, однако на практике всегда стоит внимательно следить за коэффициентами при похожих признаках.
Решение проблемы мультиколлинеарности состоит в регуляризации линейной модели. К оптимизируемому функционалу прибавляют L1 или L2 норму весов, умноженную на коэффициент регуляризации $\alpha$. В первом случае метод называется Lasso, а во втором --- Ridge. Подробнее об этом также рассказано в лекции.
Обучите регрессоры Ridge и Lasso с параметрами по умолчанию и убедитесь, что проблема с весами решилась.
End of explanation
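A quick numerical illustration (added in English; not part of the original assignment) of the ill-conditioning described above: the duplicated windspeed column makes X^T X nearly singular, which shows up as a very large condition number.
print(np.linalg.cond(X.T.dot(X)))   # a huge value means the normal equations are ill-conditioned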
#array of regressors
alphas = np.arange(1, 500, 50)
coefs_lasso = np.zeros((alphas.shape[0], X.shape[1])) # матрица весов размера (число регрессоров) x (число признаков)
coefs_ridge = np.zeros((alphas.shape[0], X.shape[1]))
#training Lasso model for different regressors
for index, item in enumerate(alphas):
ls_model = Lasso(alpha=alphas[index])
ls_model.fit(X, y)
coefs_lasso[index] = ls_model.coef_
#training Ridge model for different regressors
for index, item in enumerate(alphas):
rd_model = Ridge(alpha=alphas[index])
rd_model.fit(X, y)
coefs_ridge[index] = rd_model.coef_
Explanation: Проблема вторая: неинформативные признаки
В отличие от L2-регуляризации, L1 обнуляет веса при некоторых признаках. Объяснение данному факту дается в одной из лекций курса.
Давайте пронаблюдаем, как меняются веса при увеличении коэффициента регуляризации $\alpha$ (в лекции коэффициент при регуляризаторе мог быть обозначен другой буквой).
End of explanation
plt.figure(figsize=(8, 5))
for coef, feature in zip(coefs_lasso.T, df.columns):
plt.plot(alphas, coef, label=feature, color=np.random.rand(3))
plt.legend(loc="upper right", bbox_to_anchor=(1.4, 0.95))
plt.xlabel("alpha")
plt.ylabel("feature weight")
plt.title("Lasso")
plt.figure(figsize=(8, 5))
for coef, feature in zip(coefs_ridge.T, df.columns):
plt.plot(alphas, coef, label=feature, color=np.random.rand(3))
plt.legend(loc="upper right", bbox_to_anchor=(1.4, 0.95))
plt.xlabel("alpha")
plt.ylabel("feature weight")
plt.title("Ridge")
Explanation: Визуализируем динамику весов при увеличении параметра регуляризации:
End of explanation
from sklearn.linear_model import LassoCV
#Training the model with different regressors with LassoCV
alphas = np.arange(1, 100, 5)
lscv_model = LassoCV(alphas=alphas)
lscv_model.fit(X, y)
#Plot MSE(alpha)
plt.title('Mean square error dependence from regressor')
plt.plot(lscv_model.alphas_, np.mean(lscv_model.mse_path_, axis=1))
plt.xlabel('Alpha')
plt.ylabel('MSE')
plt.grid()
plt.axis([0, 100, 780000, 860000])
plt.show()
print('LASSO_CV MODEL')
#Best regressor value
print('\nAlpha = ' + str(lscv_model.alpha_))
#Weight coefficient for the optimal regressor
print('\nWeight coefficients:')
for item in zip(df_shuffled.columns[:-1], lscv_model.coef_.round()):
    print(item)
print('\nIndependent term:')
print(lscv_model.intercept_.round())
Explanation: Ответы на следующие вопросы можно давать, глядя на графики или выводя коэффициенты на печать.
Блок 2. Ответьте на вопросы (каждый 0.25 балла):
1. Какой регуляризатор (Ridge или Lasso) агрессивнее уменьшает веса при одном и том же alpha?
* Ответ: Lasso
1. Что произойдет с весами Lasso, если alpha сделать очень большим? Поясните, почему так происходит.
* Ответ: Все коэффициенты станут равными нулю. Коэффициент регуляризации определяет сложность модели. При его очень большом значении оказывается, что оптимально просто занулить коэффициенты и оставить константную модель, состояющую из свободного члена.
1. Можно ли утверждать, что Lasso исключает один из признаков windspeed при любом значении alpha > 0? А Ridge? Ситается, что регуляризатор исключает признак, если коэффициент при нем < 1e-3.
* Ответ: Да, Lasso исключает. Ridge нет, метод делает веса равными.
1. Какой из регуляризаторов подойдет для отбора неинформативных признаков?
* Ответ: Lasso
From here on we will work with Lasso.
So, we see that as alpha changes, the model selects different feature coefficients. We need to choose the best alpha.
To do this, first of all, we need a quality metric. As the metric we will use the very functional that least squares optimizes, i.e. the Mean Square Error.
Secondly, we need to decide which data to compute this metric on. We cannot choose alpha based on the MSE on the training set, because then we would not be able to estimate how the model will predict on data that is new to it. If we pick a single split into training and test sets (this is called holdout), we tune to those particular "new" data and may again overfit. Therefore we will make several splits of the sample, try different values of alpha on each of them, and then average the MSE. The most convenient way to make such splits is cross-validation: split the sample into K parts, or folds, and each time take one of them as the test set while the remaining folds form the training set.
Doing cross-validation for regression in sklearn is very simple: there is a dedicated regressor, LassoCV, which takes a list of alpha values as input and computes the MSE on cross-validation for each of them. After fitting (if the default cv=3 is kept), the regressor will contain the attribute mse_path_, a matrix of size len(alpha) x k, k = 3 (the number of folds in the cross-validation), holding the test MSE values for the corresponding runs. In addition, the attribute alpha_ stores the selected value of the regularization parameter, and coef_, as usual, the fitted weights corresponding to this alpha_.
Note that the regressor may change the order in which it iterates over alphas; to match it against the MSE matrix it is better to use the regressor's alphas_ attribute.
End of explanation
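To make explicit what LassoCV does under the hood, here is a small sketch of my own (not part of the original assignment) that performs the same K-fold loop manually with KFold and Lasso; it assumes the feature matrix X, the target y and the alphas grid defined in the cells above, and uses the modern sklearn.model_selection API.
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
import numpy as np
# Manual 3-fold cross-validation over the same alpha grid (mirrors LassoCV with cv=3)
X_arr, y_arr = np.asarray(X), np.asarray(y)
kf = KFold(n_splits=3, shuffle=True, random_state=42)
mse_path_manual = np.zeros((len(alphas), 3))
for i, alpha in enumerate(alphas):
    for j, (train_idx, test_idx) in enumerate(kf.split(X_arr)):
        model = Lasso(alpha=alpha)
        model.fit(X_arr[train_idx], y_arr[train_idx])
        mse_path_manual[i, j] = mean_squared_error(y_arr[test_idx], model.predict(X_arr[test_idx]))
# The best alpha is the one with the smallest MSE averaged over the folds
print('best alpha (manual CV):', alphas[np.argmin(mse_path_manual.mean(axis=1))])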
#Minimum MSE for every split
for index, item in enumerate(np.min(lscv_model.mse_path_, axis=0)):
print('Train/test split #' + str(index+1))
index_min = lscv_model.mse_path_[:,index].argmin(axis=0)
print('Alpha = ' + str(lscv_model.alphas_[index_min]) + ', MSE = ' + str(round(item, 2)))
# Plot MSE(alpha) separately for each individual CV fold (cv = 3)
plt.figure(figsize=(16,4))
lscv_model.mse_path_
plot_number = 0
for index in range(3):
plot_number += 1
plt.subplot(1, 3, plot_number)
plt.plot(lscv_model.alphas_, lscv_model.mse_path_[:, index])
plt.title('MSE vs alpha, fold #' + str(index+1))
plt.xlabel('Alpha')
plt.ylabel('MSE')
plt.grid()
plt.axis([0, 100, 740000, 880000])
Explanation: So, we have selected some regularization parameter. Let's see which alpha we would have chosen if we had split the sample into training and test sets only once, i.e. let's look at the MSE trajectories corresponding to the individual folds.
End of explanation |
8,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Restore the trained network if you need to
Step12: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if p_drop[word] < random.random()]
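As a quick sanity check (my addition, not part of the original notebook), we can compare the corpus size before and after subsampling and confirm that the most frequent word was dropped most aggressively:
print('Words before subsampling: {}'.format(len(int_words)))
print('Words after subsampling:  {}'.format(len(train_words)))
# word id 0 is the most frequent word ("the"); it should be heavily downsampled
print('Count of word 0 before: {}, after: {}'.format(int_words.count(0), train_words.count(0)))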
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
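A tiny usage example (mine, for illustration only) makes the behaviour of get_target easier to see on a toy sequence of "word ids":
toy_words = list(range(10))                      # stand-in for a list of word ids
print(get_target(toy_words, idx=5, window_size=3))
# e.g. [4, 6] or [3, 4, 6, 7], depending on the random R drawn inside get_target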
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
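Again as a small illustration of my own (not in the original notebook), we can pull a single batch from the generator and check that inputs and targets line up one-to-one:
xs, ys = next(get_batches(train_words, batch_size=4, window_size=5))
print(len(xs) == len(ys))   # one target word per (repeated) input word
print(xs[:10])
print(ys[:10])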
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
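To see why the lookup is equivalent to multiplying by a one-hot vector, here is a small numpy illustration of my own (for intuition only, not needed for the graph):
demo_embedding = np.random.rand(5, 3)       # 5 "words", 3 embedding features
one_hot = np.array([0, 0, 1, 0, 0])         # one-hot vector selecting word 2
print(np.allclose(one_hot @ demo_embedding, demo_embedding[2]))   # True: matmul just picks out row 2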
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix:
You don't actually need to do the matrix multiplication, you just need to select the row in the embedding matrix that corresponds to the input word. Then, the embedding matrix becomes a lookup table, you're looking up a vector the size of the hidden layer that represents the input word.
<img src="assets/word2vec_weight_matrix_lookup_table.png" width=500>
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from each of the two ranges (0,100) and (1000,1100); a lower id implies a more frequent word
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
8,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Probability theory
Motivation
In machine learning, as in life in general, we deal with uncertainty. This is probably why probability theory has overtaken logic as the leading system of knowledge representation in data science and machine learning.
You can make probability theory as complicated as you like. Professional mathematicians usually work with measure-theoretic probability [MC08], which is outside the scope of this work. Instead, we shall attempt to follow the example of Gerolamo Cardano (1501 — 1576), who used the game of throwing dice to work out some of the key notions in classical (as opposed to measure-theoretic) probability. Cardano had a pretty good insentive
Step2: (Let's say 0 is heads and 1 is tails.)
Similarly, in our roll-of-a-die example, the following are all events
Step3: If we get 4, say, $S$ has not occurred, since $4 \notin S$; $E$ has occurred, since $4 \in E$; $O$ has not occurred, since $4 \notin O$.
When all outcomes are equally likely, and the sample space is finite, the probability of an event $A$ is given by $$\mathbb{P}(A) = \frac{|A|}{|\Omega|},$$ where $|\cdot|$ denotes the number of elements in a given set.
Thus, the probability of the event $E$, "even number shows up" is equal to $$\mathbb{P}(A) = \frac{|E|}{|\Omega|} = \frac{3}{6} = \frac{1}{2}.$$
If Python's random number generator is decent enough, we should get pretty close to this number by simulating die rolls
Step4: Here we have used 100 simulated "rolls". If we used, 1000000, say, we would get even closer to $\frac{1}{2}$ | Python Code:
# Copyright (c) Thalesians Ltd, 2017-2019. All rights reserved
# Copyright (c) Paul Alexander Bilokon, 2017-2019. All rights reserved
# Author: Paul Alexander Bilokon <paul@thalesians.com>
# Version: 1.0 (2019.08.03)
# Email: education@thalesians.com
# Platform: Tested on Windows 10 with Python 3.6
Explanation:
End of explanation
import numpy as np
np.random.seed(42)
np.random.randint(0, 2)
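As a quick illustration of my own (not in the original text), we can repeat the toss many times and check that heads comes up about half of the time:
tosses = np.random.randint(0, 2, 10000)
print((tosses == 0).mean())   # should be close to 0.5 for a fair coin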
Explanation: Probability theory
Motivation
In machine learning, as in life in general, we deal with uncertainty. This is probably why probability theory has overtaken logic as the leading system of knowledge representation in data science and machine learning.
You can make probability theory as complicated as you like. Professional mathematicians usually work with measure-theoretic probability [MC08], which is outside the scope of this work. Instead, we shall attempt to follow the example of Gerolamo Cardano (1501 — 1576), who used the game of throwing dice to work out some of the key notions in classical (as opposed to measure-theoretic) probability. Cardano had a pretty good insentive: he was notoriously short of money and kept himself solving through gambling (and chess).
Like Cardano, we regard probability theory as a means to an end: the practice of data science, so let's keep things simple.
Objectives
To introduce the random experiment.
To introduce events.
Random experiment
When we toss an unbiased coin, we say that it lands heads up with probability $\frac{1}{2}$ and tails up with probability $\frac{1}{2}$.
Such a coin toss is an example of a random experiment and the set of outcomes of this random experiment is the sample space $\Omega = {h, t}$, where $h$ stands for "heads" and $t$ stands for tails.
What if we toss a coin twice? We could view the two coin tosses as a single random experiment with the sample space $\Omega = {hh, ht, th, tt}$, where $ht$ (for example) denotes "heads on the first toss", "tails on the second toss".
What if, instead of tossing a coin, we roll a die? The sample space for this random experiment is $\Omega = {1, 2, 3, 4, 5, 6}$.
Events
An event, then, is a subset of the sample space. In our example of the two consecutive coin tosses, getting heads on all coin tosses is an event:
$$A = \text{"getting heads on all coin tosses"} = {hh} \subseteq {hh, ht, th, tt} = \Omega.$$
Getting distinct results on the two coin tosses is also an event:
$$D = {ht, th} \subseteq {hh, ht, th, tt} = \Omega.$$
We can simulate a coin toss in Python as follows:
End of explanation
np.random.randint(1, 7)
Explanation: (Let's say 0 is heads and 1 is tails.)
Similarly, in our roll-of-a-die example, the following are all events:
$$S = \text{"six shows up"} = {6} \subseteq {1, 2, 3, 4, 5, 6} = \Omega,$$
$$E = \text{"even number shows up"} = {2, 4, 6} \subseteq {1, 2, 3, 4, 5, 6} = \Omega,$$
$$O = \text{"odd number shows up"} = {1, 3, 5} \subseteq {1, 2, 3, 4, 5, 6} = \Omega.$$
The empty set, $\emptyset = {}$, represents the impossible event, whereas the sample space $\Omega$ itself represents the certain event: one of the numbers $1, 2, 3, 4, 5, 6$ always occurs when a die is rolled, so $\Omega$ always occurs.
We can simulate the roll of a die in Python as follows:
End of explanation
outcomes = np.random.randint(1, 7, 100)
len([x for x in outcomes if x % 2 == 0]) / len(outcomes)
Explanation: If we get 4, say, $S$ has not occurred, since $4 \notin S$; $E$ has occurred, since $4 \in E$; $O$ has not occurred, since $4 \notin O$.
When all outcomes are equally likely, and the sample space is finite, the probability of an event $A$ is given by $$\mathbb{P}(A) = \frac{|A|}{|\Omega|},$$ where $|\cdot|$ denotes the number of elements in a given set.
Thus, the probability of the event $E$, "even number shows up", is equal to $$\mathbb{P}(E) = \frac{|E|}{|\Omega|} = \frac{3}{6} = \frac{1}{2}.$$
If Python's random number generator is decent enough, we should get pretty close to this number by simulating die rolls:
End of explanation
outcomes = np.random.randint(1, 7, 1000000)
len([x for x in outcomes if x % 2 == 0]) / len(outcomes)
Explanation: Here we have used 100 simulated "rolls". If we used 1,000,000, say, we would get even closer to $\frac{1}{2}$:
End of explanation |
8,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import
Step1: Reading initial data
Step2: Remove rows with NAN from data
Step3: Add diff_pt and cos(diff_phi)
Step4: Add max, sum among PIDs
Step5: define label = signB * signTrack
if > 0 (same sign) - label 1
if < 0 (different sign) - label 0
Step6: Apply ghost prob cut, leave Same Side (SS) tracks
Step7: Leave not muons, kaons, electrons, pions, protons
Step8: Calculating tagging efficiency ($\epsilon_{tag}$)
$$N (\text{passed selection}) = \sum_{\text{passed selection}} sw_i$$
$$N (\text{all events}) = \sum_{\text{all events}} sw_i,$$
where $sw_i$ is the sPlot weight (sWeight for signal)
$$\epsilon_{tag} = \frac{N (\text{passed selection})} {N (\text{all events})}$$
$$\Delta\epsilon_{tag} = \frac{\sqrt{N (\text{passed selection})}} {N (\text{all events})}$$
Step9: Choose most probable B-events
Step10: Define B-like events for training
Events with low sWeight still will be used only to test quality.
Step11: Main idea
Step12: PID pairs scatters
Step13: pt
Step14: count of tracks
Step15: PIDs histograms
Step16: Train to distinguish same sign vs opposite sign
Step17: DT
Step18: Calibration
Step19: Comparison table of different models
Step20: Implementing best tracking | Python Code:
import pandas
import numpy
from folding_group import FoldingGroupClassifier
from rep.data import LabeledDataStorage
from rep.report import ClassificationReport
from rep.report.metrics import RocAuc
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve, roc_auc_score
from utils import get_N_B_events, get_events_number, get_events_statistics
Explanation: Import
End of explanation
import root_numpy
data_nan = pandas.DataFrame(root_numpy.root2array('datasets/tracks.root', 'tracks'))
data_nan.head()
event_id_column = 'event_id'
event_id = data_nan.run.apply(str) + '_' + data_nan.event.apply(str)
data_nan[event_id_column] = numpy.unique(event_id, return_inverse=True)[1]
get_events_statistics(data_nan)
get_N_B_events()
Explanation: Reading initial data
End of explanation
data = data_nan.dropna()
len(data_nan), len(data), get_events_statistics(data)
Explanation: Remove rows with NAN from data
End of explanation
# add different between max pt in event and pt for each track
def add_diff_pt(data):
max_pt = group_max(data[event_id_column].values.astype(str), data.partPt.values)
data.loc[:, 'diff_pt'] = max_pt - data['partPt'].values
# max is computing max over tracks in the same event for saome data
def group_max(groups, data):
# computing unique integer id for each group
assert len(groups) == len(data)
_, event_id = numpy.unique(groups, return_inverse=True)
max_over_event = numpy.zeros(max(event_id) + 1) - numpy.inf
numpy.maximum.at(max_over_event, event_id, data)
return max_over_event[event_id]
# add diff pt
add_diff_pt(data)
# add cos(diff_phi)
data.loc[:, 'cos_diff_phi'] = numpy.cos(data.diff_phi.values)
Explanation: Add diff_pt and cos(diff_phi)
End of explanation
from itertools import combinations
PIDs = {'k': data.PIDNNk.values,
'e': data.PIDNNe.values,
'mu': data.PIDNNm.values,
}
for (pid_name1, pid_values1), (pid_name2, pid_values2) in combinations(PIDs.items(), 2):
data.loc[:, 'max_PID_{}_{}'.format(pid_name1, pid_name2)] = numpy.maximum(pid_values1, pid_values2)
data.loc[:, 'sum_PID_{}_{}'.format(pid_name1, pid_name2)] = pid_values1 + pid_values2
Explanation: Add max, sum among PIDs
End of explanation
data.loc[:, 'label'] = (data.signB.values * data.signTrack.values > 0) * 1
','.join(data.columns)
Explanation: define label = signB * signTrack
if > 0 (same sign) - label 1
if < 0 (different sign) - label 0
End of explanation
initial_cut = '(ghostProb < 0.4)'
data = data.query(initial_cut)
ss_selection = (data.IPs.values < 3) * (abs(data.diff_eta.values) < 0.6) * (abs(data.diff_phi.values) < 0.825)
data = data[ss_selection]
get_events_statistics(data)
Explanation: Apply ghost prob cut, leave Same Side (SS) tracks
End of explanation
threshold_kaon = 0.
threshold_muon = 0.
threshold_electron = 0.
threshold_pion = 0.
threshold_proton = 0.
cut_pid = " ( (PIDNNk > {trk}) | (PIDNNm > {trm}) | (PIDNNe > {tre}) | (PIDNNpi > {trpi}) | (PIDNNp > {trp})) "
cut_pid = cut_pid.format(trk=threshold_kaon, trm=threshold_muon, tre=threshold_electron, trpi=threshold_pion,
trp=threshold_proton)
data = data.query(cut_pid)
get_events_statistics(data)
Explanation: Leave not muons, kaons, electrons, pions, protons
End of explanation
N_B_passed = float(get_events_number(data))
tagging_efficiency = N_B_passed / get_N_B_events()
tagging_efficiency_delta = sqrt(N_B_passed) / get_N_B_events()
tagging_efficiency, tagging_efficiency_delta
hist(data.diff_pt.values, bins=100)
pass
Explanation: Calculating tagging efficiency ($\epsilon_{tag}$)
$$N (\text{passed selection}) = \sum_{\text{passed selection}} sw_i$$
$$N (\text{all events}) = \sum_{\text{all events}} sw_i,$$
where $sw_i$ - sPLot weight (sWeight for signal)
$$\epsilon_{tag} = \frac{N (\text{passed selection})} {N (\text{all events})}$$
$$\Delta\epsilon_{tag} = \frac{\sqrt{N (\text{passed selection})}} {N (\text{all events})}$$
End of explanation
_, take_indices = numpy.unique(data[event_id_column], return_index=True)
figure(figsize=[15, 5])
subplot(1, 2, 1)
hist(data.Bmass.values[take_indices], bins=100)
title('B mass hist')
xlabel('mass')
subplot(1, 2, 2)
hist(data.N_sig_sw.values[take_indices], bins=100, normed=True)
title('sWeights hist')
xlabel('signal sWeights')
plt.savefig('img/Bmass_SS.png' , format='png')
Explanation: Choose most probable B-events
End of explanation
sweight_threshold = 1.
data_sw_passed = data[data.N_sig_sw > sweight_threshold]
data_sw_not_passed = data[data.N_sig_sw <= sweight_threshold]
get_events_statistics(data_sw_passed)
_, take_indices = numpy.unique(data_sw_passed[event_id_column], return_index=True)
figure(figsize=[15, 5])
subplot(1, 2, 1)
hist(data_sw_passed.Bmass.values[take_indices], bins=100)
title('B mass hist for sWeight > 1 selection')
xlabel('mass')
subplot(1, 2, 2)
hist(data_sw_passed.N_sig_sw.values[take_indices], bins=100, normed=True)
title('sWeights hist for sWeight > 1 selection')
xlabel('signal sWeights')
plt.savefig('img/Bmass_selected_SS.png' , format='png')
hist(data_sw_passed.diff_pt.values, bins=100)
pass
Explanation: Define B-like events for training
Events with low sWeight still will be used only to test quality.
End of explanation
features = list(set(data.columns) - {'index', 'run', 'event', 'i', 'signB', 'signTrack', 'N_sig_sw', 'Bmass', 'mult',
'PIDNNp', 'PIDNNpi', 'label', 'thetaMin', 'Dist_phi', event_id_column,
'mu_cut', 'e_cut', 'K_cut', 'ID', 'diff_phi'})
features
Explanation: Main idea:
find tracks, which can help reconstruct the sign of B if you know track sign.
label = signB * signTrack
* the highest output means that this is same sign B as track
* the lowest output means that this is opposite sign B than track
Define features
End of explanation
figure(figsize=[15, 16])
bins = 60
step = 3
for i, (feature1, feature2) in enumerate(combinations(['PIDNNk', 'PIDNNm', 'PIDNNe', 'PIDNNp', 'PIDNNpi'], 2)):
subplot(4, 3, i + 1)
Z, (x, y) = numpy.histogramdd(data_sw_passed[[feature1, feature2]].values, bins=bins, range=([0, 1], [0, 1]))
pcolor(log(Z), vmin=0)
xlabel(feature1)
ylabel(feature2)
xticks(arange(bins, step), x[::step]), yticks(arange(bins, step), y[::step])
plt.savefig('img/PID_selected_SS.png' , format='png')
Explanation: PID pairs scatters
End of explanation
hist(data_sw_passed.diff_pt.values, bins=60, normed=True)
pass
Explanation: pt
End of explanation
_, n_tracks = numpy.unique(data_sw_passed[event_id_column], return_counts=True)
hist(n_tracks, bins=100)
title('Number of tracks')
plt.savefig('img/tracks_number_SS.png' , format='png')
Explanation: count of tracks
End of explanation
figure(figsize=[15, 4])
for i, column in enumerate(['PIDNNm', 'PIDNNe', 'PIDNNk']):
subplot(1, 3, i + 1)
hist(data_sw_passed[column].values, bins=60, range=(0, 1), label=column)
legend()
Explanation: PIDs histograms
End of explanation
from hep_ml.decisiontrain import DecisionTrainClassifier
from hep_ml.losses import LogLossFunction
from rep.estimators import SklearnClassifier
data_sw_passed_lds = LabeledDataStorage(data_sw_passed, data_sw_passed.label, data_sw_passed.N_sig_sw.values)
Explanation: Train to distinguish same sign vs opposite sign
End of explanation
tt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=3000, depth=6,
max_features=15, loss=LogLossFunction(regularization=100))
tt_folding = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=11,
train_features=features, group_feature=event_id_column)
%time tt_folding.fit_lds(data_sw_passed_lds)
pass
import cPickle
with open('models/dt_SS.pkl', 'w') as f:
cPickle.dump(tt_folding, f)
comparison_report = ClassificationReport({'tt': tt_folding}, data_sw_passed_lds)
comparison_report.compute_metric(RocAuc())
comparison_report.roc()
lc = comparison_report.learning_curve(RocAuc(), steps=100)
lc
for est in tt_folding.estimators:
est.estimators = est.estimators[:2000]
comparison_report.feature_importance()
Explanation: DT
End of explanation
from utils import get_result_with_bootstrap_for_given_part
models = []
models.append(get_result_with_bootstrap_for_given_part(tagging_efficiency, tagging_efficiency_delta, tt_folding,
[data_sw_passed, data_sw_not_passed], 'tt-iso', logistic=False))
models.append(get_result_with_bootstrap_for_given_part(tagging_efficiency, tagging_efficiency_delta, tt_folding,
[data_sw_passed, data_sw_not_passed], 'tt-log', logistic=True))
Explanation: Calibration
End of explanation
pandas.set_option('display.precision', 8)
result = pandas.concat(models)
result.index = result.name
result.drop('name', axis=1)
Explanation: Comparison table of different models
End of explanation
from utils import prepare_B_data_for_given_part
Bdata_prepared = prepare_B_data_for_given_part(tt_folding, [data_sw_passed, data_sw_not_passed], logistic=True)
Bdata_prepared.to_csv('models/Bdata_tracks_SS.csv', header=True, index=False)
Explanation: Implementing best tracking
End of explanation |
8,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WPS call for analogs detection and visualisation
Step1: There are different ways to call a WPS service. The following cells are examples of the same process execution with different execution settings.
Step2: Analogs Viewer
Another process called 'analogs_viewer' creates interactive charts using dc.js, output as an HTML file, to visualize the analogs_detection output. | Python Code:
##############################
# load the required libraries
#############################
from owslib.wps import WebProcessingService, monitorExecution, printInputOutput
from os import system
import time
#################################################
# connect to the compute provider hosting the WPS
#################################################
wps_url = "http://birdhouse-lsce.extra.cea.fr:8093/wps"
#wps_url = "http://localhost:8093/wps"
wps = WebProcessingService(url=wps_url, verbose=False)
##########################################
# print some information about the service
##########################################
print wps.identification.title + ':'
print '#############'
for process in wps.processes:
print '%s : \t %s' % (process.identifier, process.abstract)
#################################################
# print some information about a specific process
#################################################
# to receive this information, uncomment the following lines
#p = wps.describeprocess(identifier='analogs_detection')
#for input in p.dataInputs:
# printInputOutput(input)
# print '\n'
Explanation: WPS call for analogs detection and visualisation
End of explanation
# get information about the call command:
wps.execute?
#####################
# execute the process
#####################
# call asynchronously, polling with sleepSecs
start_time = time.time()
execute = wps.execute(
identifier="analogs_detection",
inputs=[("dist",'euclidean')], async=True)
monitorExecution(execute, sleepSecs=1)
print time.time() - start_time, "seconds"
print execute.getStatus()
for o in execute.processOutputs:
print o.reference
#####################
# execute the process
#####################
# call synchronously
start_time = time.time()
execute = wps.execute(
identifier="analogs_detection",
inputs=[("dist",'euclidean')], async=False)
print time.time() - start_time, "seconds"
print execute.getStatus()
for o in execute.processOutputs:
print o.reference
# case 1, async
inputs=[("dateSt",'2013-01-01'),("dateEn",'2014-12-31'),("refSt",'1990-01-01'),("refEn",'1995-12-31')]
start_time = time.time()
execute = wps.execute(identifier="analogs_detection", inputs=inputs, async=True)
monitorExecution(execute, sleepSecs=1)
print time.time() - start_time, "seconds"
print execute.getStatus()
for o in execute.processOutputs:
print o.reference
Explanation: There are different ways to call a WPS service. The following cells are examples of the same process execution with different execution settings.
End of explanation
# the output of the previous analogs_detection process
analogs_output = execute.processOutputs[0].reference
print analogs_output
##########################################
# print some information about the process
##########################################
p = wps.describeprocess(identifier='analogs_viewer')
for input in p.dataInputs:
printInputOutput(input)
print '\n'
###########################################
# and execute the process for visualisation
###########################################
viewer = wps.execute(identifier="analogs_viewer",
inputs=[('resource', analogs_output)],
async=False)
print viewer.getStatus()
for o in viewer.processOutputs:
print o.reference
Explanation: Analogs Viewer
Another process called 'analogs_viewer' creates interactive charts using dc.js, output as an HTML file, to visualize the analogs_detection output.
End of explanation |
8,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding new passbands to PHOEBE
In this tutorial we will show you how to add your own passband to PHOEBE. Adding a custom passband involves
Step1: If you plan on computing model atmosphere intensities (as opposed to only blackbody intensities), you will need to download atmosphere tables and unpack them into a local directory of your choice. Keep in mind that this will take a long time. Plan to go for lunch or leave it overnight. The good news is that this needs to be done only once. For the purpose of this document, we will use a local tables/ directory and assume that we are computing intensities for all available model atmospheres
Step2: Getting started
Let us start by importing phoebe, numpy and matplotlib
Step3: Passband transmission function
The passband transmission function is typically a user-provided two-column file. The first column is wavelength, and the second column is passband transmission. For the purposes of this tutorial, we will simulate the passband as a uniform box.
Step4: Let us plot this mock passband transmission function to see what it looks like
Step5: Let us now save these data in a file that we will use to register a new passband.
Step6: Registering a passband
The first step in introducing a new passband into PHOEBE is registering it with the system. We use the Passband class for that.
Step7: The first argument, ptf, is the passband transmission file we just created. Of course, you would provide an actual passband transmission function that comes from a respectable source rather than this silly tutorial.
The next two arguments, pbset and pbname, should be taken in unison. The way PHOEBE refers to passbands is a pbset
Step8: Since we have not computed any tables yet, the list is empty for now. Blackbody functions for computing the lookup tables are built into PHOEBE and you do not need any auxiliary files to generate them. The lookup tables are defined for effective temperatures between 300K and 500,000K. To compute the blackbody response, issue
Step9: Checking the content property again shows that the table has been successfully computed
Step10: We can now test-drive the blackbody lookup table we just created. For this we will use a low-level class method that computes normal emergent passband intensity, Inorm(). For the sake of simplicity, we will turn off limb darkening by setting ld_func to 'linear' and ld_coeffs to '[0.0]'
Step11: Let us now plot a range of temperatures, to make sure that normal emergent passband intensities do what they are supposed to do. While at it, let us compare what we get for the Johnson
Step12: This makes perfect sense
Step13: Note, of course, that you will need to change the path to point to the directory where you unpacked the ck2004 tables. The verbosity parameter verbose will report on the progress as computation is being done. Depending on your computer speed, this step will take up to a minute to complete. We can now check the passband's content attribute again
Step14: Let us now use the same low-level function as before to compare normal emergent passband intensity for our custom passband for blackbody and ck2004 model atmospheres. One other complication is that, unlike blackbody model that depends only on the temperature, the ck2004 model depends on surface gravity (log g) and heavy metal abundances as well, so we need to pass those arrays.
Step15: Quite a difference. That is why using model atmospheres is superior when accuracy is of importance. Next, we need to compute direction-dependent intensities for all our limb darkening and boosting needs. This is a step that takes a long time; depending on your computer speed, it can take a few minutes to complete.
Step16: This step will allow PHOEBE to compute all direction-dependent intensities on the fly, including the interpolation of the limb darkening coefficients that is model-independent. When limb darkening models are preferred (for example, when you don't quite trust direction-dependent intensities from the model atmosphere), we need to calculate two more tables
Step17: This completes the computation of Castelli & Kurucz auxiliary tables.
Computing PHOENIX response
PHOENIX is a 3-D model atmosphere code. Because of that, it is more complex and better behaved for cooler stars (down to ~2300K). The steps to compute PHOENIX intensity tables are analogous to the ones we used for ck2004; so we can do all of them in a single step
Step18: There is one extra step that we need to do for phoenix atmospheres
Step19: Now we can compare all three model atmospheres
Step20: We see that, as temperature increases, model atmosphere intensities can differ quite a bit. That explains why the choice of a model atmosphere is quite important and should be given proper consideration.
Importing Wilson-Devinney response
PHOEBE no longer shares any codebase with the WD code, but for comparison purposes it is sometimes useful to use the same atmosphere tables. If the passband you are registering with PHOEBE has been defined in WD's atmcof.dat and atmcofplanck.dat files, PHOEBE can import those coefficients and use them to compute intensities.
To import a set of WD atmospheric coefficients, you need to know the corresponding index of the passband (you can look it up in the WD user manual available at ftp
Step21: We can consult the content attribute to see the entire set of supported tables, and plot different atmosphere models for comparison purposes
Step22: Still an appreciable difference.
Saving the passband table
The final step of all this (computer's) hard work is to save the passband file so that these steps do not need to be ever repeated. From now on you will be able to load the passband file explicitly and PHOEBE will have full access to all of its tables. Your new passband will be identified as 'Custom | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Adding new passbands to PHOEBE
In this tutorial we will show you how to add your own passband to PHOEBE. Adding a custom passband involves:
downloading and setting up model atmosphere tables;
providing a passband transmission function;
defining and registering passband parameters;
computing blackbody response for the passband;
[optional] computing Castelli & Kurucz (2004) passband tables;
[optional] computing Husser et al. (2013) PHOENIX passband tables;
[optional] if the passband is one of the passbands included in the Wilson-Devinney code, importing the WD response; and
saving the generated passband file.
<!-- * \[optional\] computing Werner et al. (2012) TMAP passband tables; -->
Let's first make sure we have the correct version of PHOEBE installed. Uncomment the following line if running in an online notebook session such as colab.
End of explanation
import phoebe
from phoebe import u
# Register a passband:
pb = phoebe.atmospheres.passbands.Passband(
ptf='my_passband.ptf',
pbset='Custom',
pbname='mypb',
effwl=330,
wlunits=u.nm,
calibrated=True,
reference='A completely made-up passband published in Nowhere (2017)',
version=1.0,
comments='This is my first custom passband'
)
# Blackbody response:
pb.compute_blackbody_response()
# CK2004 response:
pb.compute_ck2004_response(path='tables/ck2004')
pb.compute_ck2004_intensities(path='tables/ck2004')
pb.compute_ck2004_ldcoeffs()
pb.compute_ck2004_ldints()
# PHOENIX response:
pb.compute_phoenix_response(path='tables/phoenix')
pb.compute_phoenix_intensities(path='tables/phoenix')
pb.compute_phoenix_ldcoeffs()
pb.compute_phoenix_ldints()
# Impute missing values from the PHOENIX model atmospheres:
pb.impute_atmosphere_grid(pb._phoenix_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_photon_grid)
for i in range(len(pb._phoenix_intensity_axes[3])):
pb.impute_atmosphere_grid(pb._phoenix_Imu_energy_grid[:,:,:,i,:])
pb.impute_atmosphere_grid(pb._phoenix_Imu_photon_grid[:,:,:,i,:])
# Wilson-Devinney response:
pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22)
# Save the passband:
pb.save('my_passband.fits')
Explanation: If you plan on computing model atmosphere intensities (as opposed to only blackbody intensities), you will need to download atmosphere tables and unpack them into a local directory of your choice. Keep in mind that this will take a long time. Plan to go for lunch or leave it overnight. The good news is that this needs to be done only once. For the purpose of this document, we will use a local tables/ directory and assume that we are computing intensities for all available model atmospheres:
mkdir tables
cd tables
wget http://phoebe-project.org/static/atms/ck2004.tgz
wget http://phoebe-project.org/static/atms/phoenix.tgz
<!-- wget http://phoebe-project.org/static/atms/tmap.tgz -->
Once the data are downloaded, unpack the archives:
tar xvzf ck2004.tgz
tar xvzf phoenix.tgz
<!-- tar xvzf tmap.tgz -->
That should leave you with the following directory structure:
tables
|____ck2004
| |____TxxxxxGxxPxx.fits (3800 files)
|____phoenix
| |____ltexxxxx-x.xx-x.x.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits (7260 files)
I don't care about the details, just show/remind me how it's done
Makes sense, and we don't judge: you want to get to science. Provided that you have the passband transmission file available and the atmosphere tables already downloaded, the sequence that will generate/register a new passband is:
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger(clevel='WARNING')
Explanation: Getting started
Let us start by importing phoebe, numpy and matplotlib:
End of explanation
wl = np.linspace(300, 360, 61)
ptf = np.zeros(len(wl))
ptf[(wl>=320) & (wl<=340)] = 1.0
Explanation: Passband transmission function
The passband transmission function is typically a user-provided two-column file. The first column is wavelength, and the second column is passband transmission. For the purposes of this tutorial, we will simulate the passband as a uniform box.
End of explanation
plt.xlabel('Wavelength [nm]')
plt.ylabel('Passband transmission')
plt.plot(wl, ptf, 'b-')
plt.show()
Explanation: Let us plot this mock passband transmission function to see what it looks like:
End of explanation
np.savetxt('my_passband.ptf', np.vstack((wl, ptf)).T)
Explanation: Let us now save these data in a file that we will use to register a new passband.
End of explanation
pb = phoebe.atmospheres.passbands.Passband(
ptf='my_passband.ptf',
pbset='Custom',
pbname='mypb',
effwl=330.,
wlunits=u.nm,
calibrated=True,
reference='A completely made-up passband published in Nowhere (2017)',
version=1.0,
comments='This is my first custom passband')
Explanation: Registering a passband
The first step in introducing a new passband into PHOEBE is registering it with the system. We use the Passband class for that.
End of explanation
pb.content
Explanation: The first argument, ptf, is the passband transmission file we just created. Of course, you would provide an actual passband transmission function that comes from a respectable source rather than this silly tutorial.
The next two arguments, pbset and pbname, should be taken in unison. The way PHOEBE refers to passbands is a pbset:pbname string, for example Johnson:V, Cousins:Rc, etc. Thus, our fake passband will be Custom:mypb.
The following two arguments, effwl and wlunits, also come as a pair. PHOEBE uses effective wavelength to apply zero-level passband corrections when better options (such as model atmospheres) are unavailable. Effective wavelength is a transmission-weighted average wavelength in the units given by wlunits.
The calibrated parameter instructs PHOEBE whether to take the transmission function as calibrated, i.e. the flux through the passband is absolutely calibrated. If set to True, PHOEBE will assume that absolute intensities computed using the passband transmission function do not need further calibration. If False, the intensities are considered as scaled rather than absolute, i.e. correct to a scaling constant. Most modern passbands provided in the recent literature are calibrated.
The reference parameter holds a reference string to the literature from which the transmission function was taken from. It is common that updated transmission functions become available, which is the point of the version parameter. If there are multiple versions of the transmission function, PHOEBE will by default take the largest value, or the value that is explicitly requested in the filter string, i.e. Johnson:V:1.0 or Johnson:V:2.0.
Finally, the comments parameter is a convenience parameter to store any additional pertinent information.
Computing blackbody response
To significantly speed up calculations, passband intensities are stored in lookup tables instead of computing them over and over again on the fly. Computed passband tables are tagged in the content property of the class:
End of explanation
pb.compute_blackbody_response()
Explanation: Since we have not computed any tables yet, the list is empty for now. Blackbody functions for computing the lookup tables are built into PHOEBE and you do not need any auxiliary files to generate them. The lookup tables are defined for effective temperatures between 300K and 500,000K. To compute the blackbody response, issue:
End of explanation
pb.content
Explanation: Checking the content property again shows that the table has been successfully computed:
End of explanation
pb.Inorm(Teff=5772, atm='blackbody', ld_func='linear', ld_coeffs=[0.0])
Explanation: We can now test-drive the blackbody lookup table we just created. For this we will use a low-level class method that computes normal emergent passband intensity, Inorm(). For the sake of simplicity, we will turn off limb darkening by setting ld_func to 'linear' and ld_coeffs to '[0.0]':
End of explanation
jV = phoebe.get_passband('Johnson:V')
teffs = np.linspace(5000, 8000, 100)
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='mypb')
plt.plot(teffs, jV.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='jV')
plt.legend(loc='lower right')
plt.show()
Explanation: Let us now plot a range of temperatures, to make sure that normal emergent passband intensities do what they are supposed to do. While at it, let us compare what we get for the Johnson:V passband.
End of explanation
pb.compute_ck2004_response(path='tables/ck2004', verbose=False)
Explanation: This makes perfect sense: the Johnson V transmission function is wider than our box transmission function, so intensity in the V band is larger at lower temperatures. However, at hotter temperatures the contribution to the UV flux increases and our box passband with a perfect transmission of 1 takes over.
Computing Castelli & Kurucz (2004) response
For any real science you will want to generate model atmosphere tables. The default choice in PHOEBE are the models computed by Fiorella Castelli and Bob Kurucz (website, paper) that feature new opacity distribution functions. In principle, you can generate PHOEBE-compatible tables for any model atmospheres, but that would require a bit of book-keeping legwork in the PHOEBE backend. Contact us to discuss an extension to other model atmospheres.
To compute Castelli & Kurucz (2004) passband tables, we will use the previously downloaded model atmospheres. We start with the ck2004 normal intensities:
End of explanation
pb.content
Explanation: Note, of course, that you will need to change the path to point to the directory where you unpacked the ck2004 tables. The verbosity parameter verbose will report on the progress as computation is being done. Depending on your computer speed, this step will take up to a minute to complete. We can now check the passband's content attribute again:
End of explanation
loggs = np.ones(len(teffs))*4.43
abuns = np.zeros(len(teffs))
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.legend(loc='lower right')
plt.show()
Explanation: Let us now use the same low-level function as before to compare normal emergent passband intensity for our custom passband for blackbody and ck2004 model atmospheres. One other complication is that, unlike blackbody model that depends only on the temperature, the ck2004 model depends on surface gravity (log g) and heavy metal abundances as well, so we need to pass those arrays.
End of explanation
pb.compute_ck2004_intensities(path='tables/ck2004', verbose=False)
Explanation: Quite a difference. That is why using model atmospheres is superior when accuracy is important. Next, we need to compute direction-dependent intensities for all our limb darkening and boosting needs. This is a step that takes a long time; depending on your computer speed, it can take a few minutes to complete.
End of explanation
pb.compute_ck2004_ldcoeffs()
pb.compute_ck2004_ldints()
Explanation: This step will allow PHOEBE to compute all direction-dependent intensities on the fly, including the interpolation of the limb darkening coefficients that is model-independent. When limb darkening models are preferred (for example, when you don't quite trust direction-dependent intensities from the model atmosphere), we need to calculate two more tables: one for limb darkening coefficients and the other for the integrated limb darkening. That is done by two methods that can take a couple of minutes to complete:
End of explanation
pb.compute_phoenix_response(path='tables/phoenix', verbose=False)
pb.compute_phoenix_intensities(path='tables/phoenix', verbose=False)
pb.compute_phoenix_ldcoeffs()
pb.compute_phoenix_ldints()
print(pb.content)
Explanation: This completes the computation of Castelli & Kurucz auxiliary tables.
Computing PHOENIX response
PHOENIX is a 3-D model atmosphere code. Because of that, it is more complex and better behaved for cooler stars (down to ~2300K). The steps to compute PHOENIX intensity tables are analogous to the ones we used for ck2004, so we can do all of them in a single step:
End of explanation
pb.impute_atmosphere_grid(pb._phoenix_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_photon_grid)
for i in range(len(pb._phoenix_intensity_axes[3])):
pb.impute_atmosphere_grid(pb._phoenix_Imu_energy_grid[:,:,:,i,:])
pb.impute_atmosphere_grid(pb._phoenix_Imu_photon_grid[:,:,:,i,:])
Explanation: There is one extra step that we need to do for phoenix atmospheres: because there are gaps in the coverage of atmospheric parameters, we need to impute those values in order to allow for seamless interpolation. This is achieved by the call to impute_atmosphere_grid(). It is a computationally intensive step that can take 10+ minutes.
End of explanation
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='phoenix', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='phoenix')
plt.legend(loc='lower right')
plt.show()
Explanation: Now we can compare all three model atmospheres:
End of explanation
pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22)
Explanation: We see that, as temperature increases, model atmosphere intensities can differ quite a bit. That explains why the choice of a model atmosphere is quite important and should be given proper consideration.
Importing Wilson-Devinney response
PHOEBE no longer shares any codebase with the WD code, but for comparison purposes it is sometimes useful to use the same atmosphere tables. If the passband you are registering with PHOEBE has been defined in WD's atmcof.dat and atmcofplanck.dat files, PHOEBE can import those coefficients and use them to compute intensities.
To import a set of WD atmospheric coefficients, you need to know the corresponding index of the passband (you can look it up in the WD user manual available at ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/ebdoc2003.2feb2004.pdf.gz) and you need to grab the files ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/atmcofplanck.dat.gz and ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/atmcof.dat.gz from Bob Wilson's webpage. For this particular passband the index is 22. To import, issue:
End of explanation
pb.content
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='phoenix', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='phoenix')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='extern_atmx', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='wd_atmx')
plt.legend(loc='lower right')
plt.show()
Explanation: We can consult the content attribute to see the entire set of supported tables, and plot different atmosphere models for comparison purposes:
End of explanation
pb.save('~/.phoebe/atmospheres/tables/passbands/my_passband.fits')
Explanation: Still an appreciable difference.
Saving the passband table
The final step of all this (computer's) hard work is to save the passband file so that these steps never need to be repeated. From now on you will be able to load the passband file explicitly and PHOEBE will have full access to all of its tables. Your new passband will be identified as 'Custom:mypb'.
To make PHOEBE automatically load the passband, it needs to be added to one of the passband directories that PHOEBE recognizes. If there are no proprietary aspects that hinder the dissemination of the tables, please consider contributing them to PHOEBE so that other users can use them.
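In a later session you can then load the passband back explicitly (a sketch; it assumes the Passband.load() class method available in your PHOEBE version and the path used in the save() call):
pb = phoebe.atmospheres.passbands.Passband.load('~/.phoebe/atmospheres/tables/passbands/my_passband.fits')
print(pb.content)  # should list all tables computed above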
End of explanation |
8,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-aerchem', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: EMAC-2-53-AERCHEM
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
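One plausible way to fill in a 1.N property such as this is to call DOC.set_value once per selected value; for example (hypothetical selections for illustration only):
DOC.set_value("Sea ice temperature")
DOC.set_value("Sea ice concentration")
DOC.set_value("Sea ice thickness")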
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
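For example, a model that uses a fixed seawater freezing point might record it as (hypothetical value, in deg C):
DOC.set_value(-1.8)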
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have used any additional parameterised values (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution from which fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
8,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hyperparameters and Model Validation
Previously, we saw the basic recipe for applying a supervised machine learning model
Step1: Next we choose a model and hyperparameters
Step2: Then we train the model, and use it to predict labels for data we already know
Step3: Finally, we compute the fraction of correctly labeled points
Step4: We see an accuracy score of 1.0, which indicates that 100% of points were correctly labeled by our model!
But is this truly measuring the expected accuracy?
Have we really come upon a model that we expect to be correct 100% of the time?
Model validation the right way
Step5: The nearest-neighbor classifier is about 90% accurate on this hold-out set, which is more in line with our expectation.
The hold-out set is similar to unknown data, because the model has not "seen" it before.
But, we have lost a portion of our data to the model training.
In the above case, half the dataset does not contribute to the training of the model! This is not optimal.
Model validation via cross-validation
One way to address this is to use cross-validation; that is, to do a sequence of fits where each subset of the data is used both as a training set and as a validation set.
Visually, it might look something like this
Step6: We could compute the mean of the two accuracy scores to get a better measure of the global model performance.
This particular form of cross-validation is a two-fold cross-validation.
We could expand on this idea to use even more trials, and more folds.
Here is a visual depiction of five-fold cross-validation
Step7: This gives us an even better idea of the performance of the algorithm.
How could we take this idea to its extreme?
The extreme case is one in which the number of folds equals the number of data points.
This type of cross-validation is known as leave-one-out cross-validation, and can be used as follows
Step8: Because we have 150 samples, leave-one-out cross-validation yields scores for 150 trials, and each score indicates either a successful (1.0) or an unsuccessful (0.0) prediction.
Taking the mean of these gives an estimate of the error rate
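A minimal sketch, assuming the iris data and nearest-neighbor model from above and scikit-learn's LeaveOneOut splitter:
from sklearn.model_selection import cross_val_score, LeaveOneOut
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
scores.mean()  # mean accuracy over the 150 leave-one-out trials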
Step9: This gives us a good impression of the performance of our model. But there is also a problem. Can you spot it?
Selecting the Best Model
Now that we can evaluate a model's performance, we will tackle the question of how to select a model and its hyperparameters.
A very important question to ask is
Step10: Now let's create some data to which we will fit our model
Step11: We can now visualize our data, along with polynomial fits of several degrees
Step12: We can control the model complexity (the degree of the polynomial), which can be any non-negative integer.
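A minimal sketch of such a tunable model, assuming the usual scikit-learn pipeline approach (the helper name PolynomialRegression is ours, not a library function):
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def PolynomialRegression(degree=2, **kwargs):
    # polynomial feature expansion followed by an ordinary linear fit
    return make_pipeline(PolynomialFeatures(degree), LinearRegression(**kwargs))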
A useful question to answer is this
Step13: This shows precisely the behavior we expect
Step14: Learning Curves
One important aspect of model complexity is that the optimal model will generally depend on the size of your training data.
For example, let's generate a new dataset that is five times larger
Step15: We will duplicate the preceding code to plot the validation curve for this larger dataset; for reference let's over-plot the previous results as well
Step16: From the validation curve it is clear that the larger dataset can support a much more complicated model
Step17: This plot gives us a visual depiction of how our model responds to increasing training data.
When the learning curve has already converged (i.e., when the training and validation curves are already close to each other), adding more training data will not significantly improve the fit!
This situation is seen in the left panel, with the learning curve for the degree-2 model.
To increase the converged score, a more complicated model must be used.
In the right panel
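A sketch of how such learning curves can be computed, assuming scikit-learn's learning_curve utility, the PolynomialRegression helper sketched above, numpy imported as np, and X, y holding the regression data created earlier:
from sklearn.model_selection import learning_curve
N, train_lc, val_lc = learning_curve(PolynomialRegression(degree=2), X, y,
                                     cv=7, train_sizes=np.linspace(0.3, 1.0, 25))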
Step18: Notice that like a normal estimator, this has not yet been applied to any data.
Calling the fit() method will fit the model at each grid point, keeping track of the scores along the way
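A minimal sketch of setting up such a grid search, assuming the PolynomialRegression helper above, numpy imported as np, and scikit-learn's GridSearchCV:
from sklearn.model_selection import GridSearchCV
param_grid = {'polynomialfeatures__degree': np.arange(21),
              'linearregression__fit_intercept': [True, False]}
grid = GridSearchCV(PolynomialRegression(), param_grid, cv=7)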
Step19: Now that this is fit, we can ask for the best parameters as follows
Step20: Finally, if we wish, we can use the best model and show the fit to our data using code from before | Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
Explanation: Hyperparameters and Model Validation
Previously, we saw the basic recipe for applying a supervised machine learning model:
Choose a class of model
Choose model hyperparameters
Fit the model to the training data
Use the model to predict labels for new data
The first two pieces of this are the most important part of using these tools and techniques effectively.
The question that comes up is: How can we make an informed choice for these parameters?
We've touched upon questions from this realm already, but here we are going to examine it in more detail.
Thinking about Model Validation
In principle, model validation is very simple:
* Choosing a model and its hyperparameters
* Estimating its performance: applying it to some of the training data and comparing the predictions to the known values.
Let's first go through a naive approach and see why it fails.
Model validation the wrong way
Let's demonstrate the naive approach to validation using the Iris data, which we saw previously:
End of explanation
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=1)
Explanation: Next we choose a model and hyperparameters:
End of explanation
model.fit(X, y)
y_model = model.predict(X)
Explanation: Then we train the model, and use it to predict labels for data we already know:
End of explanation
from sklearn.metrics import accuracy_score
accuracy_score(y, y_model)
Explanation: Finally, we compute the fraction of correctly labeled points:
End of explanation
from sklearn.model_selection import train_test_split
# split the data with 50% in each set
X1, X2, y1, y2 = train_test_split(X, y, random_state=0, train_size=0.5)
# fit the model on one set of data
model.fit(X1, y1)
# evaluate the model on the second set of data
y2_model = model.predict(X2)
accuracy_score(y2, y2_model)
Explanation: We see an accuracy score of 1.0, which indicates that 100% of points were correctly labeled by our model!
But is this truly measuring the expected accuracy?
Have we really come upon a model that we expect to be correct 100% of the time?
Model validation the right way: Holdout sets
We need to use a holdout set: that is, we hold back some subset of the data from the training of the model, and then use this holdout set to check the model performance.
This splitting can be done using the train_test_split utility in Scikit-Learn:
End of explanation
y2_model = model.fit(X1, y1).predict(X2)
y1_model = model.fit(X2, y2).predict(X1)
accuracy_score(y1, y1_model), accuracy_score(y2, y2_model)
Explanation: The nearest-neighbor classifier is about 90% accurate on this hold-out set, which is more in line with our expectation.
The hold-out set is similar to unknown data, because the model has not "seen" it before.
But, we have lost a portion of our data to the model training.
In the above case, half the dataset does not contribute to the training of the model! This is not optimal.
Model validation via cross-validation
One way to address this is to use cross-validation; that is, to do a sequence of fits where each subset of the data is used both as a training set and as a validation set.
Visually, it might look something like this:
Here we do two validation trials, alternately using each half of the data as a holdout set.
Using the split data from before, we could implement it like this:
End of explanation
from sklearn.model_selection import cross_val_score
cross_val_score(model, X, y, cv=5)
Explanation: We could compute the mean of the two accuracy scores to get a better measure of the global model performance.
This particular form of cross-validation is a two-fold cross-validation.
We could expand on this idea to use even more trials, and more folds.
Here is a visual depiction of five-fold cross-validation:
We split the data into five groups
Each of them is used to evaluate the model fit on the other 4/5 of the data.
What is the advantage of cross-validation with more folds? What is the drawback?
This would be rather tedious to do by hand, and so we can use Scikit-Learn's cross_val_score convenience routine to do it succinctly:
End of explanation
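The per-fold scores returned by cross_val_score are usually summarized by their mean and spread. A minimal sketch, reusing the model and data from above:
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean(), scores.std())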
from sklearn.model_selection import LeaveOneOut
import numpy as np
loo = LeaveOneOut()
loo.get_n_splits(X)
scores = []
for train_index, test_index in loo.split(X):
model.fit(X[train_index], y[train_index])
scores.append(accuracy_score(y[test_index], model.predict(X[test_index])))
scores = np.array(scores)
scores
Explanation: This gives us an even better idea of the performance of the algorithm.
How could we take this idea to its extreme?
The case in which our number of folds is equal to the number of data points.
This type of cross-validation is known as leave-one-out cross validation, and can be used as follows:
End of explanation
scores.mean()
Explanation: Because we have 150 samples, the leave one out cross-validation yields scores for 150 trials, and the score indicates either successful (1.0) or unsuccessful (0.0) prediction.
Taking the mean of these gives an estimate of the model's accuracy:
End of explanation
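As an aside, the manual loop above can be expressed more compactly by passing a LeaveOneOut splitter straight to cross_val_score; a minimal sketch reusing the objects defined above:
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(loo_scores.mean())  # should match scores.mean() above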
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree), LinearRegression(**kwargs))
Explanation: This gives us a good impression on the performance of our model. But there is also a problem. Can you spot it?
Selecting the Best Model
Now that we can evaluate a model's performance, we will tackle the question of how to select a model and its hyperparameters.
A very important question to ask is: If my estimator is underperforming, how should I move forward? What are the options?
Use a more complicated/more flexible model
Use a less complicated/less flexible model
Gather more training samples
Gather more data to add features to each sample
Sometimes the results are counter-intuitive:
* Using a more complicated model will give worse results
* Adding more training samples may not improve your results
The Bias-variance trade-off
Fundamentally, the question of "the best model" is about finding a sweet spot in the tradeoff between bias and variance.
Consider the following figure, which presents two regression fits to the same dataset:
Comment on the models.
Which model is better?
Is either of the models 'good'? Why?
Consider what happens if we use these two models to predict the y-value for some new data.
In the following diagrams, the red/lighter points indicate data that is omitted from the training set:
It is clear that neither of these models is a particularly good fit to the data, but they fail in different ways.
The score here is the $R^2$ score, or coefficient of determination, which measures how well a model performs relative to a simple mean of the target values.
* $R^2=1$ indicates a perfect match
* $R^2=0$ indicates the model does no better than simply taking the mean of the data
* Negative values mean even worse models.
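A tiny illustration of these three cases with scikit-learn's r2_score, using made-up numbers (a sketch, not part of the figure above):
from sklearn.metrics import r2_score
y_true = np.array([1.0, 2.0, 3.0, 4.0])
print(r2_score(y_true, y_true))                     # 1.0: perfect predictions
print(r2_score(y_true, np.full(4, y_true.mean())))  # 0.0: no better than predicting the mean
print(r2_score(y_true, y_true[::-1]))               # negative: worse than predicting the mean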
Left model
Attempts to find a straight-line fit through the data.
The data are intrinsically more complicated than a straight line, so the straight-line model will never be able to describe this dataset well.
Such a model is said to underfit the data: that is, it does not have enough model flexibility to suitably account for all the features in the data
Another way of saying this is that the model has high bias
Right model
Attempts to fit a high-order polynomial through the data.
The model fit has enough flexibility to nearly perfectly account for the fine features in the data
It very accurately describes the training data
Its precise form seems to be more reflective of the noise properties rather than the intrinsic properties of whatever process generated that data
Such a model is said to overfit the data: that is, it has so much model flexibility that the model ends up accounting for random errors as well as the underlying data distribution
Another way of saying this is that the model has high variance.
From the scores associated with these two models, we can make an observation that holds more generally:
For high-bias models, the performance of the model on the validation set is similar to the performance on the training set, but the overall score is low.
For high-variance models, the performance of the model on the validation set is far worse than the performance on the training set.
If we imagine we have the ability to tune the model complexity, then we can expect the training score and validation score to behave as illustrated in the following figure:
The diagram shown here is often called a validation curve, and we see the following essential features:
The training score is everywhere higher than the validation score. This is generally the case. Why?
For very low model complexity (a high-bias model), the training data is under-fit, which means that the model is a poor predictor both for the training data and for any previously unseen data.
For very high model complexity (a high-variance model), the training data is over-fit, which means that the model predicts the training data very well, but fails for any previously unseen data.
Which model/level of complexity should we choose?
Validation curves in Scikit-Learn
Let's look at an example. We will use a polynomial regression model: this is a generalized linear model in which the degree of the polynomial is a tunable parameter.
For example, a degree-1 polynomial fits a straight line to the data; for model parameters $a$ and $b$:
$$
y = ax + b
$$
A degree-3 polynomial fits a cubic curve to the data; for model parameters $a, b, c, d$:
$$
y = ax^3 + bx^2 + cx + d
$$
We can generalize this to any number of polynomial features.
In Scikit-Learn, we can implement this with a simple linear regression combined with the polynomial preprocessor.
We will use a pipeline to string these operations together:
End of explanation
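To see what the polynomial preprocessor actually does to the input, here is a quick sketch (not part of the original notebook); for degree 3 the columns are [1, x, x^2, x^3]:
from sklearn.preprocessing import PolynomialFeatures
PolynomialFeatures(degree=3).fit_transform(np.array([[2.0], [3.0]]))
# array([[ 1.,  2.,  4.,  8.],
#        [ 1.,  3.,  9., 27.]])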
import numpy as np
def make_data(N, err=1.0, rseed=1):
# randomly sample the data
rng = np.random.RandomState(rseed)
X = rng.rand(N, 1) ** 2
y = 10 - 1. / (X.ravel() + 0.1)
if err > 0:
y += err * rng.randn(N)
return X, y
X, y = make_data(40)
Explanation: Now let's create some data to which we will fit our model:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # for beautiful plotting
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
plt.scatter(X.ravel(), y, color='black') # plot data
axis = plt.axis()
# plot polynomial fits of several degrees
for degree in [1, 3, 5]:
y_test = PolynomialRegression(degree).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test, label='degree={0}'.format(degree))
plt.xlim(-0.1, 1.0)
plt.ylim(-2, 12)
plt.legend(loc='best');
Explanation: We can now visualize our data, along with polynomial fits of several degrees:
End of explanation
from sklearn.model_selection import validation_curve
degree = np.arange(0, 21)
train_score, val_score = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree', degree, cv=10)
plt.plot(degree, np.median(train_score, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score');
Explanation: We can control the model complexity (the degree of the polynomial), which can be any non-negative integer.
A useful question to answer is this: what degree of polynomial provides a suitable trade-off between bias (under-fitting) and variance (over-fitting)?
We can make progress in this by visualizing the validation curve
This can be done straightforwardly using the validation_curve convenience routine
Given a model, data, parameter name, and a range to explore, this function will automatically compute both the training score and validation score across the range:
End of explanation
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = PolynomialRegression(3).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test);
plt.axis(lim);
Explanation: This shows precisely the behavior we expect:
* The training score is everywhere higher than the validation score
* The training score is monotonically improving with increased model complexity
* The validation score reaches a maximum before dropping off as the model becomes over-fit.
Which degree is optimal?
Let's plot this.
End of explanation
X2, y2 = make_data(200)
plt.scatter(X2.ravel(), y2);
Explanation: Learning Curves
One important aspect of model complexity is that the optimal model will generally depend on the size of your training data.
For example, let's generate a new five times larger dataset:
End of explanation
degree = np.arange(51)
train_score2, val_score2 = validation_curve(PolynomialRegression(), X2, y2,
'polynomialfeatures__degree', degree, cv=10)
plt.plot(degree, np.median(train_score2, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score2, 1), color='red', label='validation score')
plt.plot(degree[:train_score.shape[0]], np.median(train_score, 1), color='blue', alpha=0.3, linestyle='dashed')
plt.plot(degree[:train_score.shape[0]], np.median(val_score, 1), color='red', alpha=0.3, linestyle='dashed')
plt.legend(loc='best')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score');
Explanation: We will duplicate the preceding code to plot the validation curve for this larger dataset; for reference let's over-plot the previous results as well:
End of explanation
from sklearn.model_selection import learning_curve
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for i, degree in enumerate([2, 9]):
N, train_lc, val_lc = learning_curve(PolynomialRegression(degree),
X, y, cv=10,
train_sizes=np.linspace(0.3, 1, 25))
ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
color='gray', linestyle='dashed')
ax[i].set_ylim(0, 1)
ax[i].set_xlim(N[0], N[-1])
ax[i].set_xlabel('training size')
ax[i].set_ylabel('score')
ax[i].set_title('degree = {0}'.format(degree), size=14)
ax[i].legend(loc='best')
Explanation: From the validation curve it is clear that the larger dataset can support a much more complicated model:
* The peak here is probably around a degree of 6
* Even a degree-20 model is not seriously overfitting the data
Thus we see that the behavior of the validation curve has not one but two important inputs:
* the model complexity.
* the number of training points.
It is often useful to explore the behavior of the model as a function of the number of training points.
This can be done by using increasingly larger subsets of the data to fit our model.
This is called a learning curve.
The general behavior of learning curves is this:
* A model of a given complexity will overfit a small dataset: this means the training score will be relatively high, while the validation score will be relatively low.
* A model of a given complexity will underfit a large dataset: this means that the training score will decrease, but the validation score will increase.
* A model will never, except by chance, give a better score to the validation set than the training set: this means the curves should keep getting closer together but never cross.
With these features in mind, we would expect a learning curve to look qualitatively like that shown in the following figure:
The learning curve converges to a particular score as the number of training samples grows.
Once we have enough training data, a particular model has converged.
That means: adding more training data will not help!
To increase the model performance, another (often more complex) model must be chosen.
Learning curves in Scikit-Learn
Scikit-Learn offers a convenient utility for computing such learning curves:
End of explanation
from sklearn.model_selection import GridSearchCV
param_grid = {'polynomialfeatures__degree': np.arange(21),
'linearregression__fit_intercept': [True, False],
'linearregression__normalize': [True, False]}
grid = GridSearchCV(PolynomialRegression(), param_grid, cv=10)
Explanation: This plot gives us a visual depiction of how our model responds to increasing training data.
When the learning curve has already converged (i.e., when the training and validation curves are already close to each other) adding more training data will not significantly improve the fit!
This situation is seen in the left panel, with the learning curve for the degree-2 model.
To increase the converged score a more complicated model must be used.
In the right panel: by moving to a much more complicated model, we increase the score of convergence
The drawback is a higher model variance (indicated by the difference between the training and validation scores).
If we were to add even more data points, the learning curve for the more complicated model would eventually converge.
Plotting a learning curve for your particular choice of model and dataset can help you to make this type of decision about how to move forward in improving your analysis.
What is the difference between a validation curve and a learning curve?
Validation in Practice: Grid Search
The preceding discussion gave you some intuition into the trade-off between bias and variance, and its dependence on model complexity and training set size.
In practice, models generally have more than one knob to turn, and thus plots of validation and learning curves change from lines to multi-dimensional surfaces.
Such visualizations are difficult.
We would like to find the particular model that maximizes the validation score directly, instead.
Scikit-Learn provides automated tools to do this in the grid search module.
Here is an example of using grid search to find the optimal polynomial model.
We will explore a three-dimensional grid of model features:
* Polynomial degree
* The flag telling us whether to fit the intercept
* The flag telling us whether to normalize the problem.
This can be set up using Scikit-Learn's GridSearchCV meta-estimator:
End of explanation
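Before fitting, it is worth a rough count of the work involved: the grid above has 21 * 2 * 2 = 84 parameter combinations, and each combination is re-fit once per cross-validation fold, so the single call below trains several hundred small models.
n_candidates = len(param_grid['polynomialfeatures__degree']) * 2 * 2
print(n_candidates)  # 84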
grid.fit(X, y);
Explanation: Notice that like a normal estimator, this has not yet been applied to any data.
Calling the fit() method will fit the model at each grid point, keeping track of the scores along the way:
End of explanation
grid.best_params_
Explanation: Now that this is fit, we can ask for the best parameters as follows:
End of explanation
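Alongside the best parameters, GridSearchCV also records the cross-validated score achieved by that combination:
grid.best_score_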
model = grid.best_estimator_
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = model.fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test);
plt.axis(lim);
Explanation: Finally, if we wish, we can use the best model and show the fit to our data using code from before:
End of explanation |
8,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualising Clustering with Voronoi Tesselations
When experimenting with using the Voronoi Tesselation to identify which machines are picked up by certain points, it was easy to extend the idea to visualising clustering through a voronoi.
Using the voronoi_finite_polygons_2d method from pycobra.visualisation, it's easy to do this
Step1: Let's make some blobs so clustering is easy.
Step2: We set up a few scikit-learn clustering machines which we'd like to visualise the results of.
Step3: Helper function to implement the Voronoi. | Python Code:
%matplotlib inline
import numpy as np
from pycobra.cobra import Cobra
from pycobra.visualisation import Visualisation
from pycobra.diagnostics import Diagnostics
import matplotlib.pyplot as plt
from sklearn import cluster
Explanation: Visualising Clustering with Voronoi Tesselations
When experimenting with using the Voronoi Tesselation to identify which machines are picked up by certain points, it was easy to extend the idea to visualising clustering through a voronoi.
Using the voronoi_finite_polygons_2d method from pycobra.visualisation, it's easy to do this
End of explanation
from sklearn.datasets.samples_generator import make_blobs
X, Y = make_blobs(n_samples=200, centers=2, n_features=2)
Y = np.power(X[:,0], 2) + np.power(X[:,1], 2)
Explanation: Let's make some blobs so clustering is easy.
End of explanation
two_means = cluster.KMeans(n_clusters=2)
spectral = cluster.SpectralClustering(n_clusters=2, eigen_solver='arpack', affinity="nearest_neighbors")
dbscan = cluster.DBSCAN(eps=.6)
affinity_propagation = cluster.AffinityPropagation(damping=.9, preference=-200)
birch = cluster.Birch(n_clusters=2)
from pycobra.visualisation import voronoi_finite_polygons_2d
from scipy.spatial import Voronoi, voronoi_plot_2d
Explanation: We set up a few scikit-learn clustering machines which we'd like to visualise the results of.
End of explanation
def plot_cluster_voronoi(data, algo):
# passing input space to set up voronoi regions.
points = np.hstack((np.reshape(data[:,0], (len(data[:,0]), 1)), np.reshape(data[:,1], (len(data[:,1]), 1))))
vor = Voronoi(points)
# use helper Voronoi
regions, vertices = voronoi_finite_polygons_2d(vor)
fig, ax = plt.subplots()
plot = ax.scatter([], [])
indice = 0
for region in regions:
ax.plot(data[:,0][indice], data[:,1][indice], 'ko')
polygon = vertices[region]
# if it isn't gradient based we just color red or blue depending on whether that point uses the machine in question
color = algo.labels_[indice]
# we assume only two
if color == 0:
color = 'r'
else:
color = 'b'
ax.fill(*zip(*polygon), alpha=0.4, color=color, label="")
indice += 1
ax.axis('equal')
plt.xlim(vor.min_bound[0] - 0.1, vor.max_bound[0] + 0.1)
plt.ylim(vor.min_bound[1] - 0.1, vor.max_bound[1] + 0.1)
two_means.fit(X)
plot_cluster_voronoi(X, two_means)
dbscan.fit(X)
plot_cluster_voronoi(X, dbscan)
spectral.fit(X)
plot_cluster_voronoi(X, spectral)
affinity_propagation.fit(X)
plot_cluster_voronoi(X, affinity_propagation)
birch.fit(X)
plot_cluster_voronoi(X, birch)
Explanation: Helper function to implement the Voronoi.
End of explanation |
8,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Late night 1 hour hack of the freshly released dataset on train time tables by IRCTC.
Source
Step1: Distribution of Arrival and Departure Times
Lets analyze the arrival and departure time distributions. As we can see from the plots below, both the times follow as similar distribution. What is interesting is that a majority of the trains arrive during the night (which is good as Indians love to travel during night).
Step2: It would also be interesting to find out the distribution of the stoppage time at a station.
$Stoppage_time = Departure_time - Arrival_time$
Step3: This looks wierd. Stoppage time cannot be negative or more than 500 minutes (~8 hours). Let us remove these outlires and plot our distributions again.
Step4: This is better but still appears that most stoppage times are less than 30 minutes. So let us plot again in that range.
Step5: This is more informative. We see that most stoppage times are either 1 or 2 minutes or a multiple of 5 minutes. Makes a lot of sense. Now let us look filter the data to make it consist of the stoppage time in this range.
Step6: Aah, it looks like less trains arrive and depart during lunch hours around 1200-1500 Hours. Looks wierd but can also point to the fact that many trains run at night and travel short distances. This makes me think that we should look closely at the total distance per train.
Distance analysis
Lets now analyze the total distance travelled by a train. This can be easily found by using the last value for each train.
Step7: Train specific analysis
Ok this is insteresting.
We observe that majority of the trains cover 15-25 stations.
We also see that many trains are short distance trains travelling only 500-700 Kilometers.
Arrival time for many trains at their last stop is mostly during morning 0500 to afternoon 1300 hours and also a lot around midnight.
Departure time for a majority of the trains is actually mostly during night.
Now the question is
Step8: The regression plot shows that we cannot draw any conclusion regarding the relation between number of stopns and distance. We do see that low stops mean small distances but for larger distances we observe that this condition doesn't hold true. This can be attributed to the availability of both express as well as passenger trains for longer distances.
Step9: We observe that 50% of the trains travel less than 810 Km as well as have less than 20 stops. Maximum distance travelled by a train is 4273 Km and maximum stoppages are 128, both of which are very high numbers.
Analysis of Stations
Let us look at which stations are popular.
Step10: Looks like Vijaywada is the station where maximum trains have a stoppage. I am upset not to see my place Allahabad in the top 20 list. Neverthless, let us plot the distribution of these stoppages.
Step11: Looks like very few stations have a high volume of trains stopping. Most stations see close to 5 trains.
Let us now look at some train statistics like | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# Load the data into a dataframe
df = pd.read_csv("data/isl_wise_train_detail_03082015_v1.csv")
sns.set_context("poster")
# Show some rows
df.head()
df.columns
# Convert time columns to datetime objects
df[u'Arrival time'] = pd.to_datetime(df[u'Arrival time'])
df[u'Departure time'] = pd.to_datetime(df[u'Departure time'])
df.head()
Explanation: Late night 1 hour hack of the freshly released dataset on train time tables by IRCTC.
Source: https://data.gov.in/catalog/indian-railways-train-time-table-0#web_catalog_tabs_block_10
End of explanation
fig, ax = plt.subplots(1,2, sharey=True)
df[u'Arrival time'].map(lambda x: x.hour).hist(ax=ax[0], bins=24)
df[u'Departure time'].map(lambda x: x.hour).hist(ax=ax[1], bins=24)
ax[0].set_xlabel("Arrival Time")
ax[1].set_xlabel("Departure Time")
Explanation: Distribution of Arrival and Departure Times
Let's analyze the arrival and departure time distributions. As we can see from the plots below, both times follow a similar distribution. What is interesting is that a majority of the trains arrive during the night (which is good, as Indians love to travel at night).
End of explanation
df["Stoppage"] = (df[u'Departure time'] - df[u'Arrival time']).astype('timedelta64[m]') # Find stoppage time in minutes
# Plot distribution of stoppage time
df["Stoppage"].hist()
plt.xlabel("Stoppage Time")
Explanation: It would also be interesting to find out the distribution of the stoppage time at a station.
$\text{Stoppage time} = \text{Departure time} - \text{Arrival time}$
End of explanation
df["Stoppage"][(df["Stoppage"]> 0) & (df["Stoppage"] < 61)].hist() # Let us take that max stoppage time can be an hour.
plt.xlabel("Stoppage Time")
Explanation: This looks weird. Stoppage time cannot be negative or more than 500 minutes (~8 hours). Let us remove these outliers and plot our distributions again.
End of explanation
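Before filtering, it is worth counting how many records fall outside this range; a quick sketch (the negative values most likely come from overnight halts, where the departure clock time belongs to the next day):
print((df["Stoppage"] < 0).sum(), "stops with negative stoppage time")
print((df["Stoppage"] > 60).sum(), "stops longer than an hour")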
df["Stoppage"][(df["Stoppage"]> 0) & (df["Stoppage"] < 31)].hist(bins=30) # Let us take that max stoppage time can be an hour.
plt.xlabel("Stoppage Time")
Explanation: This is better, but it still appears that most stoppage times are less than 30 minutes. So let us plot again in that range.
End of explanation
df_stoppage_30 = df[(df["Stoppage"]> 0) & (df["Stoppage"] < 31)] # Filter data between nice stoppage times
# Plot data for this stoppage time range.
fig, ax = plt.subplots(1,2, sharey=True)
df_stoppage_30[u'Arrival time'].map(lambda x: x.hour).hist(ax=ax[0], bins=24)
df_stoppage_30[u'Departure time'].map(lambda x: x.hour).hist(ax=ax[1], bins=24)
ax[0].set_xlabel("Arrival Time")
ax[1].set_xlabel("Departure Time")
Explanation: This is more informative. We see that most stoppage times are either 1 or 2 minutes or a multiple of 5 minutes. Makes a lot of sense. Now let us filter the data so that it consists only of stoppage times in this range.
End of explanation
# Total Number of stations of the train, last arrival time, first departure time, last distance, first station and last station.
df_train_dist = df[[u'Train No.', u'station Code', u'Arrival time', u'Departure time',
u'Distance', u'Source Station Code', u'Destination station Code']]\
.groupby(u'Train No.').agg({u'station Code': "count", u'Arrival time': "last",
u'Departure time': "first", u'Distance': "last",
u'Source Station Code': "first", u'Destination station Code': "last"})
df_train_dist.head()
# Let us plot the distribution of the distances as well as station codes, as well as arrival and departure times
fig, ax = plt.subplots(2,2)
df_train_dist[u'station Code'].hist(ax=ax[0][0], bins=range(df_train_dist[u'station Code'].max() + 1))
df_train_dist[u'Distance'].hist(ax=ax[0][1], bins=50)
ax[0][0].set_xlabel("Total Stations stopped")
ax[0][1].set_xlabel("Total Distance covered")
df_train_dist[u'Arrival time'].map(lambda x: x.hour).hist(ax=ax[1][0], bins=range(24))
df_train_dist[u'Departure time'].map(lambda x: x.hour).hist(ax=ax[1][1], bins=range(24))
ax[1][0].set_xlabel("Arrival Time")
ax[1][1].set_xlabel("Departure Time")
Explanation: Aah, it looks like fewer trains arrive and depart during lunch hours around 1200-1500 Hours. Looks weird, but it can also point to the fact that many trains run at night and travel short distances. This makes me think that we should look closely at the total distance per train.
Distance analysis
Let's now analyze the total distance travelled by a train. This can be easily found by using the last value for each train.
End of explanation
sns.lmplot(x=u'station Code', y=u'Distance', data=df_train_dist, x_estimator=np.mean)
Explanation: Train specific analysis
OK, this is interesting.
We observe that a majority of the trains cover 15-25 stations.
We also see that many trains are short-distance trains travelling only 500-700 kilometers.
Arrival time for many trains at their last stop is mostly during morning 0500 to afternoon 1300 hours and also a lot around midnight.
Departure time for a majority of the trains is actually mostly during night.
Now the question is: do trains with more stops, on average, run longer distances or not? Let us try to answer this question.
End of explanation
# Let us see some general statistics of the distances and the number of stops.
df_train_dist.describe()
Explanation: The regression plot shows that we cannot draw any firm conclusion regarding the relation between the number of stops and distance. We do see that few stops usually mean short distances, but for larger distances we observe that this condition doesn't hold true. This can be attributed to the availability of both express as well as passenger trains for longer distances.
End of explanation
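To put a number on how weak this relationship is, we can also compute the correlation between the two columns (a quick sketch):
df_train_dist[[u'station Code', u'Distance']].corr()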
df[[u'Train No.', u'Station Name']].groupby(u'Station Name').count().sort(u'Train No.', ascending=False).head(20)
Explanation: We observe that 50% of the trains travel less than 810 km and have fewer than 20 stops. The maximum distance travelled by a train is 4273 km and the maximum number of stops is 128, both of which are very high numbers.
Analysis of Stations
Let us look at which stations are popular.
End of explanation
df[[u'Train No.', u'Station Name']].groupby(u'Station Name').count().hist(bins=range(1,320,2), log=True)
plt.xlabel("Number of trains stopping")
plt.ylabel("Number of stations")
Explanation: Looks like Vijaywada is the station where the maximum number of trains have a stoppage. I am upset not to see my place Allahabad in the top 20 list. Nevertheless, let us plot the distribution of these stoppages.
End of explanation
df_train_dist.sort(u'station Code', ascending=False).head(10) # Top 10 trains with maximum number of stops
df_train_dist.sort(u'Distance', ascending=False).head(10) # Top 10 trains with maximum distance
fig, ax = plt.subplots(1,2)
sns.regplot(x=df_train_dist[u'Arrival time'].map(lambda x: x.hour), y=df_train_dist[u'Distance'], x_estimator=np.mean, ax=ax[0])
sns.regplot(x=df_train_dist[u'Departure time'].map(lambda x: x.hour), y=df_train_dist[u'Distance'], x_estimator=np.mean, ax=ax[1])
Explanation: Looks like very few stations have a high volume of trains stopping. Most stations see close to 5 trains.
Let us now look at some train statistics like:
Trains with maximum stops, I would personally avoid these trains.
Trains which travel the maximum distance; if they take fewer stops I would prefer these.
End of explanation |
8,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notes for Machine Learning for Trading
Udacity - ud501
Part 1
Step1: You can download the csv files with the stock data in it from Yahoo Finance (Historical Data)
using your browser, the pandas_datareader or you build something yourself with requests.
Step5: Bollinger Bands
Bollinger Bands are a simple idea, you smooth out the stock data by computing the rolling mean (RM) of the dataset. The window should be 20 days. Now you need the rolling standard deviation (RSTD). All that is left is to add a band 2 * RSTD over and under your RM band.
Step7: Daily Returns
Step8: Histogram of Daily Returns
Step9: Scatterplot of Daily Return
This is useful to compare the daily return of V to SPY (S&P500).
Step10: Daily Portfolio Value
Step11: Portfolio Statistics | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from util import get_data, plot_data, fill_missing_values
%matplotlib inline
Explanation: Notes for Machine Learning for Trading
Udacity - ud501
Part 1
End of explanation
dates = pd.date_range('2014-01-01', '2014-12-31')
symbols = ['V']
df = get_data(symbols, dates) # will always add SPY (S&P500) for reference
fill_missing_values(df) # fill missing values (NaN)
plot_data(df) # plots the data
visa = df['V'] # extract the visa data from the dataframe
plot_data(visa, title="Visa stock prices")
Explanation: You can download the CSV files with the stock data from Yahoo Finance (Historical Data)
using your browser, the pandas_datareader package, or something you build yourself with requests.
End of explanation
def r_mean(values, window=20):
    """Return rolling mean of given values, using specified window size."""
    return values.rolling(window, center=False).mean()
def r_std(values, window=20):
    """Return rolling standard deviation of given values, using specified window size."""
    return values.rolling(window, center=False).std()
def bbands(rm, rstd):
    """Return upper and lower Bollinger Bands."""
    upper = rm + 2*rstd
    lower = rm - 2*rstd
    return (upper, lower)
rm = r_mean(visa)
rstd = r_std(visa)
(upper, lower) = bbands(rm, rstd)
ax = visa.plot(title='Bollinger Bands' + u'\N{REGISTERED SIGN}')
ax.set_ylabel('Price')
ax.set_xlabel('Date')
rm.plot(label='rolling mean', ax=ax)
upper.plot(label='upper band', ax=ax)
lower.plot(label='lower band', ax=ax)
Explanation: Bollinger Bands
Bollinger Bands are a simple idea: you smooth out the stock data by computing the rolling mean (RM) of the dataset, using a 20-day window. Then you need the rolling standard deviation (RSTD). All that is left is to add a band 2 * RSTD above and below your RM band.
End of explanation
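As a rough sanity check of the 2-standard-deviation bands (a sketch; the textbook ~95% figure would hold only for normally distributed prices), we can compute the fraction of days on which the price stays inside them:
valid = upper.notna()  # the first window-1 days have no rolling statistics
inside = ((visa[valid] >= lower[valid]) & (visa[valid] <= upper[valid])).mean()
print("Fraction of days inside the bands: %.2f" % inside)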
def get_daily_returns(df):
    """Compute and return the daily return values."""
    dr = df.copy()
    dr[1:] = (df[1:] / df[:-1].values) - 1
    dr.iloc[0] = 0  # set daily return for the first day to 0 (replaces the deprecated .ix assignment)
    return dr
dr = get_daily_returns(df)
plot_data(dr['V'], title="Daily Returns", ylabel="Daily Returns")
Explanation: Daily Returns
End of explanation
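The same daily returns can also be obtained with pandas' built-in pct_change; a minimal sketch (fillna(0) mirrors setting the first day's return to zero above):
dr_alt = df.pct_change().fillna(0)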
m = dr['V'].mean()
std = dr['V'].std()
dr['V'].hist(bins=20)
plt.axvline(m, color='y', linestyle='dashed', linewidth=2)
plt.axvline(std, color='r', linestyle='dashed', linewidth=2)
plt.axvline(-std, color='r', linestyle='dashed', linewidth=2)
Explanation: Histogram of Daily Returns
End of explanation
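The fat tails visible in the histogram can also be quantified with kurtosis; pandas reports excess kurtosis, so positive values indicate heavier tails than a normal distribution (a quick sketch):
print(dr['V'].kurtosis())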
dr.plot(kind='scatter', x='SPY', y='V')
beta, alpha = np.polyfit(dr['SPY'], dr['V'], 1)
print("beta: " + str(beta))
print("alpha: " + str(alpha))
plt.plot(dr['SPY'], beta*dr['SPY'] + alpha, '-', color='r')
Explanation: Scatterplot of Daily Return
This is useful to compare the daily return of V to SPY (S&P500).
End of explanation
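Beta and alpha describe the fitted line; the correlation coefficient tells us how tightly the daily returns hug that line (a quick sketch):
dr.corr(method='pearson')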
start_val = 1000000
start_date = '2011-1-1'
end_date = '2011-12-31'
symbols = ['SPY', 'XOM', 'GOOG', 'GLD']
allocs = [0.4, 0.4, 0.1, 0.1]
dates = pd.date_range(start_date, end_date)
prices = get_data(symbols, dates)
print(prices.head())
plot_data(prices)
def get_normed_prices(prices):
return prices / prices.iloc[0].values
normed = get_normed_prices(prices)
print(normed.head())
plot_data(normed, title="Normed Price", ylabel="Normed Price")
def get_alloced_prices(allocs):
return normed * allocs
alloced = get_alloced_prices(allocs)
print(alloced.head())
def get_pos_vals(start_val):
return alloced * start_val
pos_vals = get_pos_vals(start_val)
print(pos_vals.head())
def get_portfolio_value(pos_vals):
return pos_vals.sum(axis=1)
port_val = get_portfolio_value(pos_vals)
plot_data(port_val, title="Portfolio Value", ylabel="Value")
port_dr = get_daily_returns(port_val)
plot_data(port_dr, title="Daily Return", ylabel="Daily Return")
dates = pd.date_range(start_date, end_date)
symbols = ['SPY']
df = get_data(symbols, dates)
dr = get_daily_returns(df)
dr['PORTFOLIO'] = port_dr
dr.plot(kind='scatter', x='SPY', y='PORTFOLIO')
beta, alpha = np.polyfit(dr['SPY'], port_dr, 1)
print("beta: " + str(beta))
print("alpha: " + str(alpha))
plt.plot(dr['SPY'], beta*dr['SPY'] + alpha, '-', color='r') # CAPM Equation
Explanation: Daily Portfolio Value
End of explanation
cum_ret = (port_val[-1] / port_val[0]) - 1
print("The Cumulative Return is %s." % cum_ret)
avg_daily_ret = port_dr.mean()
print("The Average Daily Return is %s." % avg_daily_ret)
std_daily_ret = port_dr.std()
print("The Standard Deviation of the Daily Return is %s." % std_daily_ret)
sr = np.sqrt(252) * (avg_daily_ret / std_daily_ret)
print("The Sharpe Ratio is %s" % sr)
Explanation: Portfolio Statistics
End of explanation |
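A variant worth noting (a sketch, not part of the code above): the course's full Sharpe ratio subtracts a daily risk-free rate from the daily returns; with daily_rf = 0 it reduces to the value computed above.
daily_rf = 0.0  # assumed risk-free rate of zero
sr_rf = np.sqrt(252) * (avg_daily_ret - daily_rf) / std_daily_ret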
8,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: First we will make a default NormalFault.
Step2: This fault has a strike of NE and dips to the SE. Thus the uplifted nodes (shown in yellow) are in the NW half of the domain.
The default NormalFault will not uplift the boundary nodes. We change this by using the keyword argument include_boundaries. If this is specified, the elevation of the boundary nodes is calculated as an average of the faulted nodes adjacent to the boundaries. This occurs because most Landlab erosion components do not operate on boundary nodes.
Step3: We can add functionality to the NormalFault with other keyword arguments. We can change the fault strike and dip, as well as specify a time series of fault uplift through time.
Step4: By reversing the order of (x1, y1) and (x2, y2) we can reverse the location of the upthrown nodes (all else equal).
Step5: We can also specify complex time-rock uplift rate histories, but we'll explore that later in the tutorial.
Next let's make a landscape evolution model with a normal fault. Here we'll use a HexModelGrid to highlight that we can use both raster and non-raster grids with this component.
We will do a series of three numerical experiments and will want to keep a few parameters constant. Since you might want to change them, we are making it easier to change all of them together. They are defined in the next block
Step6: As we can see, the upper left portion of the grid has been uplifted an a stream network has developed over the whole domain.
How might this change when we also uplift the boundaries nodes?
Step7: We can see that when the boundary nodes are not included, the faulted region is impacted by the edge boundary conditions differently. Depending on your application, one or the other of these boundary condition options may suite your problem better.
The last thing to explore is the use of the fault_rate_through_time parameter. This allows us to specify generic fault throw rate histories. For example, consider the following history, in which every 100,000 years there is a 10,000 year period in which the fault is active.
Step8: The default value for uplift rate is 0.001 (units unspecified as it will depend on the x and t units in a model, but in this example we assume time units of years and length units of meters).
This will result in a total of 300 m of fault throw over the 300,000 year model time period. This amount of uplift can also be accommodated by faster fault motion that occurs over shorter periods of time.
Next we plot the cumulative fault throw for the two cases.
Step9: A technical note
Step10: As you can see the resulting topography is very different than in the case with continuous uplift.
For our final example, we'll use NormalFault with a more complicated model in which we have both a soil layer and bedrock. In order to move, material must convert from bedrock to soil by weathering.
First we import remaining modules and set some parameter values
Step11: Next we create the grid and run the model.
Step12: We can also examine the soil thickness and soil production rate. Here in the soil depth, we see it is highest along the ridge crests.
Step13: The soil production rate is highest where the soil depth is low, as we would expect given the exponential form. | Python Code:
# start by importing necessary modules
import matplotlib.pyplot as plt
import numpy as np
from landlab import HexModelGrid, RasterModelGrid
from landlab.components import (
FastscapeEroder,
FlowAccumulator,
NormalFault,
StreamPowerEroder,
)
from landlab.plot import imshow_grid
%matplotlib inline
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Introduction to the NormalFault component
This tutorial provides an introduction to the NormalFault component in the Landlab toolkit. This component takes the following parameters.
Parameters
--------
grid : ModelGrid
faulted_surface : str or ndarray of shape `(n_nodes, )` or list of str
or ndarrays.
Surface that is modified by the NormalFault component. Can be a
field name or array or a list of strings or ndarrays if the fault.
should uplift more than one field. Default value is
`topographic__elevation`.
fault_throw_rate_through_time : dict, optional
Dictionary that specifies the time varying throw rate on the fault.
Expected format is:
``fault_throw_rate_through_time = {'time': array, 'rate': array}``
Default value is a constant rate of 0.001 (units not specified).
fault_dip_angle : float, optional
Dip angle of the fault in degrees. Default value is 90 degrees.
fault_trace : dictionary, optional
Dictionary that specifies the coordinates of two locations on the
fault trace. Expected format is
``fault_trace = {'x1': float, 'y1': float, 'x2': float, 'y2': float}``
where the vector from ``(x1, y1)`` to ``(x2, y2)`` defines the
strike of the fault trace. The orientation of the fault dip relative
to the strike follows the right hand rule.
Default is for the fault to strike NE.
include_boundaries : boolean, optional
Flag to indicate if model grid boundaries should be uplifted. If
set to ``True`` uplifted model grid boundaries will be set to the
average value of their upstream nodes. Default value is False.
The NormalFault component will divide the model domain into two regions, a 'faulted nodes' region which will experience vertical rock uplift at a rate of
$t \cdot \sin (d)$
where $t$ is the fault throw rate and $d$ is the fault dip angle.
While dip angles less than 90 degrees are permitted, in its present implementation, the NormalFault component does not translate field information laterally.
The fault orientation is specified by two coordinate pairs: (x1, y1) and (x2, y2). The strike of the fault, specified with the right-hand rule convention, is the vector from (x1, y1) to (x2, y2). Given that this component creates a normal fault, in which the footwall moves up relative to the hanging wall, this means that the nodes that are counterclockwise from the strike are the uplifted nodes.
To start, let's import necessary Landlab and Python modules.
End of explanation
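A quick worked example of the uplift-rate expression above (a sketch): a throw rate of 0.001 m/yr on a fault dipping 60 degrees gives a vertical rock uplift rate of
0.001 * np.sin(np.deg2rad(60))  # ~0.00087 m/yr; for the default 90-degree dip, sin(d) = 1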
grid = RasterModelGrid((6, 6), xy_spacing=10)
grid.add_zeros("topographic__elevation", at="node")
nf = NormalFault(grid)
plt.figure()
imshow_grid(grid, nf.faulted_nodes.astype(int), cmap="viridis")
plt.plot(grid.x_of_node, grid.y_of_node, "c.")
plt.show()
Explanation: First we will make a default NormalFault.
End of explanation
nf = NormalFault(grid, include_boundaries=True)
plt.figure()
imshow_grid(grid, nf.faulted_nodes.astype(int), cmap="viridis")
plt.plot(grid.x_of_node, grid.y_of_node, "c.")
plt.show()
Explanation: This fault has a strike of NE and dips to the SE. Thus the uplifted nodes (shown in yellow) are in the NW half of the domain.
The default NormalFault will not uplift the boundary nodes. We change this by using the keyword argument include_boundaries. If this is specified, the elevation of the boundary nodes is calculated as an average of the faulted nodes adjacent to the boundaries. This occurs because most Landlab erosion components do not operate on boundary nodes.
End of explanation
grid = RasterModelGrid((60, 100), xy_spacing=10)
z = grid.add_zeros("topographic__elevation", at="node")
nf = NormalFault(grid, fault_trace={"x1": 0, "y1": 200, "y2": 30, "x2": 600})
imshow_grid(grid, nf.faulted_nodes.astype(int), cmap="viridis")
Explanation: We can add functionality to the NormalFault with other keyword arguments. We can change the fault strike and dip, as well as specify a time series of fault uplift through time.
End of explanation
grid = RasterModelGrid((60, 100), xy_spacing=10)
z = grid.add_zeros("topographic__elevation", at="node")
nf = NormalFault(grid, fault_trace={"y1": 30, "x1": 600, "x2": 0, "y2": 200})
imshow_grid(grid, nf.faulted_nodes.astype(int), cmap="viridis")
Explanation: By reversing the order of (x1, y1) and (x2, y2) we can reverse the location of the upthrown nodes (all else equal).
End of explanation
# here are the parameters to change
K = 0.0005 # stream power coefficient, bigger = streams erode more quickly
U = 0.0001 # uplift rate in meters per year
dt = 1000 # time step in years
dx = 10 # space step in meters
nr = 60 # number of model rows
nc = 100 # number of model columns
# instantiate the grid
grid = HexModelGrid((nr, nc), dx, node_layout="rect")
# add a topographic__elevation field with noise
z = grid.add_zeros("topographic__elevation", at="node")
z[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)
fr = FlowAccumulator(grid)
fs = FastscapeEroder(grid, K_sp=K)
nf = NormalFault(grid, fault_trace={"x1": 0, "x2": 800, "y1": 0, "y2": 500})
# Run this model for 300 100-year timesteps (30,000 years).
for i in range(300):
nf.run_one_step(dt)
fr.run_one_step()
fs.run_one_step(dt)
z[grid.core_nodes] += 0.0001 * dt
# plot the final topography
imshow_grid(grid, z)
Explanation: We can also specify complex time-rock uplift rate histories, but we'll explore that later in the tutorial.
Next let's make a landscape evolution model with a normal fault. Here we'll use a HexModelGrid to highlight that we can use both raster and non-raster grids with this component.
We will do a series of three numerical experiments and will want to keep a few parameters constant. Since you might want to change them, we are making it easier to change all of them together. They are defined in the next block:
End of explanation
# instantiate the grid
grid = HexModelGrid((nr, nc), 10, node_layout="rect")
# add a topographic__elevation field with noise
z = grid.add_zeros("topographic__elevation", at="node")
z[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)
fr = FlowAccumulator(grid)
fs = FastscapeEroder(grid, K_sp=K)
nf = NormalFault(
grid, fault_trace={"x1": 0, "x2": 800, "y1": 0, "y2": 500}, include_boundaries=True
)
# Run this model for 300 100-year timesteps (30,000 years).
for i in range(300):
nf.run_one_step(dt)
fr.run_one_step()
fs.run_one_step(dt)
z[grid.core_nodes] += U * dt
# plot the final topography
imshow_grid(grid, z)
Explanation: As we can see, the upper left portion of the grid has been uplifted and a stream network has developed over the whole domain.
How might this change when we also uplift the boundary nodes?
End of explanation
time = (
np.array(
[
0.0,
7.99,
8.00,
8.99,
9.0,
17.99,
18.0,
18.99,
19.0,
27.99,
28.00,
28.99,
29.0,
]
)
* 10
* dt
)
rate = np.array([0, 0, 0.01, 0.01, 0, 0, 0.01, 0.01, 0, 0, 0.01, 0.01, 0])
plt.figure()
plt.plot(time, rate)
plt.plot([0, 300 * dt], [0.001, 0.001])
plt.xlabel("Time [years]")
plt.ylabel("Fault Throw Rate [m/yr]")
plt.show()
Explanation: We can see that when the boundary nodes are not included, the faulted region is impacted by the edge boundary conditions differently. Depending on your application, one or the other of these boundary condition options may suit your problem better.
The last thing to explore is the use of the fault_rate_through_time parameter. This allows us to specify generic fault throw rate histories. For example, consider the following history, in which every 100,000 years there is a 10,000 year period in which the fault is active.
End of explanation
t = np.arange(0, 300 * dt, dt)
rate_constant = np.interp(t, [0, 300 * dt], [0.001, 0.001])
rate_variable = np.interp(t, time, rate)
cumulative_rock_uplift_constant = np.cumsum(rate_constant) * dt
cumulative_rock_uplift_variable = np.cumsum(rate_variable) * dt
plt.figure()
plt.plot(t, cumulative_rock_uplift_constant)
plt.plot(t, cumulative_rock_uplift_variable)
plt.xlabel("Time [years]")
plt.ylabel("Cumulative Fault Throw [m]")
plt.show()
Explanation: The default value for uplift rate is 0.001 (units unspecified as it will depend on the x and t units in a model, but in this example we assume time units of years and length units of meters).
This will result in a total of 300 m of fault throw over the 300,000 year model time period. This amount of uplift can also be accommodated by faster fault motion that occurs over shorter periods of time.
Next we plot the cumulative fault throw for the two cases.
End of explanation
# instantiate the grid
grid = HexModelGrid((nr, nc), 10, node_layout="rect")
# add a topographic__elevation field with noise
z = grid.add_zeros("topographic__elevation", at="node")
z[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)
fr = FlowAccumulator(grid)
fs = FastscapeEroder(grid, K_sp=K)
nf = NormalFault(
grid,
fault_throw_rate_through_time={"time": time, "rate": rate},
fault_trace={"x1": 0, "x2": 800, "y1": 0, "y2": 500},
include_boundaries=True,
)
# Run this model for 300 100-year timesteps (30,000 years).
for i in range(300):
nf.run_one_step(dt)
fr.run_one_step()
fs.run_one_step(dt)
z[grid.core_nodes] += U * dt
# plot the final topography
imshow_grid(grid, z)
Explanation: A technical note: Beyond the times specified, the internal workings of the NormalFault will use the final value provided in the rate array.
Let's see how this changes the model results.
End of explanation
from landlab.components import DepthDependentDiffuser, ExponentialWeatherer
# here are the parameters to change
K = 0.0005 # stream power coefficient, bigger = streams erode more quickly
U = 0.0001 # uplift rate in meters per year
max_soil_production_rate = (
0.001 # Maximum weathering rate for bare bedrock in meters per year
)
soil_production_decay_depth = 0.7 # Characteristic weathering depth in meters
linear_diffusivity = 0.01  # Hillslope diffusivity in m2 per year
soil_transport_decay_depth = 0.5 # Characteristic soil transport depth in meters
dt = 100 # time step in years
dx = 10 # space step in meters
nr = 60 # number of model rows
nc = 100 # number of model columns
?ExponentialWeatherer
Explanation: As you can see the resulting topography is very different than in the case with continuous uplift.
For our final example, we'll use NormalFault with a more complicated model in which we have both a soil layer and bedrock. In order to move, material must convert from bedrock to soil by weathering.
First we import remaining modules and set some parameter values
End of explanation
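The ExponentialWeatherer used below produces soil from bedrock at a rate that decays with soil depth; a sketch of the assumed exponential form, evaluated with the parameters defined above:
# assumed form: P = P0 * exp(-d / d_star), with P0 = max_soil_production_rate and d_star = soil_production_decay_depth
max_soil_production_rate * np.exp(-0.7 / soil_production_decay_depth)  # production rate under 0.7 m of soil, roughly P0/e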
# instantiate the grid
grid = HexModelGrid((nr, nc), 10, node_layout="rect")
# add a topographic__elevation field with noise
z = grid.add_zeros("topographic__elevation", at="node")
z[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)
# create a field for soil depth
d = grid.add_zeros("soil__depth", at="node")
# create a bedrock elevation field
b = grid.add_zeros("bedrock__elevation", at="node")
b[:] = z - d
fr = FlowAccumulator(grid, depression_finder="DepressionFinderAndRouter", routing="D4")
fs = FastscapeEroder(grid, K_sp=K)
ew = ExponentialWeatherer(
grid,
soil_production__decay_depth=soil_production_decay_depth,
soil_production__maximum_rate=max_soil_production_rate,
)
dd = DepthDependentDiffuser(
grid,
linear_diffusivity=linear_diffusivity,
soil_transport_decay_depth=soil_transport_decay_depth,
)
nf = NormalFault(
grid,
fault_throw_rate_through_time={"time": [0, 30], "rate": [0.001, 0.001]},
fault_trace={"x1": 0, "x2": 800, "y1": 0, "y2": 500},
include_boundaries=False,
)
# Run this model for 300 100-year timesteps (30,000 years).
for i in range(300):
# Move normal fault
nf.run_one_step(dt)
# Route flow
fr.run_one_step()
# Erode with water
fs.run_one_step(dt)
# We must also now erode the bedrock where relevant. If water erosion
# into bedrock has occurred, the bedrock elevation will be higher than
# the actual elevation, so we simply re-set bedrock elevation to the
# lower of itself or the current elevation.
b = grid.at_node["bedrock__elevation"]
b[:] = np.minimum(b, grid.at_node["topographic__elevation"])
# Calculate regolith-production rate
ew.calc_soil_prod_rate()
# Generate and move soil around. This component will update both the
# soil thickness and topographic elevation fields.
dd.run_one_step(dt)
# uplift the whole domain, we need to do this to both bedrock and topography
z[grid.core_nodes] += U * dt
b[grid.core_nodes] += U * dt
# plot the final topography
imshow_grid(grid, "topographic__elevation")
Explanation: Next we create the grid and run the model.
End of explanation
# and the soil depth
imshow_grid(grid, "soil__depth", cmap="viridis")
Explanation: We can also examine the soil thickness and soil production rate. Here in the soil depth, we see it is highest along the ridge crests.
End of explanation
# and the soil production rate
imshow_grid(grid, "soil_production__rate", cmap="viridis")
Explanation: The soil production rate is highest where the soil depth is low, as we would expect given the exponential form.
End of explanation |
8,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gensim Tutorial on Online Non-Negative Matrix Factorization
This notebooks explains basic ideas behind the open source NMF implementation in Gensim, including code examples for applying NMF to text processing.
What's in this tutorial?
Introduction
Step1: Dataset preparation
Let's load the notorious 20 Newsgroups dataset from Gensim's repository of pre-trained models and corpora
Step2: Create a train/test split
Step3: We'll use very simple preprocessing with stemming to tokenize each document. YMMV; in your application, use whatever preprocessing makes sense in your domain. Correctly preparing the input has major impact on any subsequent ML training.
Step4: Dictionary compilation
Let's create a mapping between tokens and their ids. Another option would be a HashDictionary, saving ourselves one pass over the training documents.
Step5: Create training corpus
Let's vectorize the training corpus into the bag-of-words format. We'll train LDA on a BOW and NMFs on an TF-IDF corpus
Step6: Here we simply stored the bag-of-words vectors into a list, but Gensim accepts any iterable as input, including streamed ones. To learn more about memory-efficient input iterables, see our Data Streaming in Python
Step7: View the learned topics
Step8: Evaluation measure
Step9: Topic inference on new documents
With the NMF model trained, let's fetch one news document not seen during training, and infer its topic vector.
Step10: Word topic inference
Similarly, we can inspect the topic distribution assigned to a vocabulary term
Step11: Internal NMF state
Density is a fraction of non-zero elements in a matrix.
Step12: Term-topic matrix of shape (words, topics).
Step13: Topic-document matrix for the last batch of shape (topics, batch)
Step14: 3. Benchmarks
Gensim NMF vs Sklearn NMF vs Gensim LDA
We'll run these three unsupervised models on the 20newsgroups dataset.
20 Newsgroups also contains labels for each document, which will allow us to evaluate the trained models on an "upstream" classification task, using the unsupervised document topics as input features.
Metrics
We'll track these metrics as we train and test NMF on the 20-newsgroups corpus we created above
Step15: Run the models
Step16: Benchmark results
Step17: Main insights
Gensim NMF is ridiculously fast and leaves both LDA and Sklearn far behind in terms of training time and quality on downstream task (F1 score), though coherence is the lowest among all models.
Gensim NMF beats Sklearn NMF in RAM consumption, but L2 norm is a bit worse.
Gensim NMF consumes a bit more RAM than LDA.
Learned topics
Let's inspect the 5 topics learned by each of the three models
Step18: Subjectively, Gensim and Sklearn NMFs are on par with each other, LDA looks a bit worse.
4. NMF on English Wikipedia
This section shows how to train an NMF model on a large text corpus, the entire English Wikipedia
Step19: Load the Wikipedia dump
We'll use the gensim.downloader to download a parsed Wikipedia dump (6.1 GB disk space)
Step20: Print the titles and sections of the first Wikipedia article, as a little sanity check
Step22: Let's create a Python generator function that streams through the downloaded Wikipedia dump and preprocesses (tokenizes, lower-cases) each article
Step23: Create a word-to-id mapping, in order to vectorize texts. Makes a full pass over the Wikipedia corpus, takes ~3.5 hours
Step24: Store preprocessed Wikipedia as bag-of-words sparse matrix in MatrixMarket format
When training NMF with a single pass over the input corpus ("online"), we simply vectorize each raw text straight from the input storage
Step26: For the purposes of this tutorial though, we'll serialize ("cache") the vectorized bag-of-words vectors to disk, to wiki.mm file in MatrixMarket format. The reason is, we'll be re-using the vectorized articles multiple times, for different models for our benchmarks, and also shuffling them, so it makes sense to amortize the vectorization time by persisting the resulting vectors to disk.
So, let's stream through the preprocessed sparse Wikipedia bag-of-words matrix while storing it to disk. This step takes about 3 hours and needs 38 GB of disk space
Step27: Save preprocessed Wikipedia in scipy.sparse format
This is only needed to run the Sklearn NMF on Wikipedia, for comparison in the benchmarks below. Sklearn expects in-memory scipy sparse input, not on-the-fly vector streams. Needs additional ~2 GB of disk space.
Skip this step if you don't need the Sklearn's NMF benchmark, and only want to run Gensim's NMF.
Step28: Metrics
We'll track these metrics as we train and test NMF on the Wikipedia corpus we created above
Step29: Define common parameters, to be shared by all evaluated models
Step30: Wikipedia training
Train Gensim NMF model and record its metrics
Step31: Train Gensim LDA and record its metrics
Step32: Train Sklearn NMF and record its metrics
Careful! Sklearn loads the entire input Wikipedia matrix into RAM. Even though the matrix is sparse, you'll need FIXME GB of free RAM to run the cell below.
Step33: Wikipedia results
Step34: Insights
Gensim's online NMF outperforms Sklearn's NMF in terms of speed and RAM consumption
Step35: It seems all three models successfully learned useful topics from the Wikipedia corpus.
5. And now for something completely different
Step37: Modified face decomposition notebook
Adapted from the excellent Scikit-learn tutorial (BSD license) | Python Code:
import logging
import time
from contextlib import contextmanager
import os
from multiprocessing import Process
import psutil
import numpy as np
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition.nmf import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
import gensim.downloader
from gensim import matutils, utils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, TfidfModel
from gensim.models.basemodel import BaseTopicModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Gensim Tutorial on Online Non-Negative Matrix Factorization
This notebooks explains basic ideas behind the open source NMF implementation in Gensim, including code examples for applying NMF to text processing.
What's in this tutorial?
Introduction: Why NMF?
Code example on 20 Newsgroups
Benchmarks against Sklearn's NMF and Gensim's LDA
Large-scale NMF training on the English Wikipedia (sparse text vectors)
NMF on face decomposition (dense image vectors)
1. Introduction to NMF
What's in a name?
Gensim's Online Non-Negative Matrix Factorization (NMF, NNMF, ONMF) implementation is based on Renbo Zhao, Vincent Y. F. Tan: Online Nonnegative Matrix Factorization with Outliers, 2016 and is optimized for extremely large, sparse, streamed inputs. Such inputs happen in NLP with unsupervised training on massive text corpora.
Why Online? Because corpora and datasets in modern ML can be very large, and RAM is limited. Unlike batch algorithms, online algorithms learn iteratively, streaming through the available training examples, without loading the entire dataset into RAM or requiring random-access to the data examples.
Why Non-Negative? Because non-negativity leads to more interpretable, sparse "human-friendly" topics. This is in contrast to e.g. SVD (another popular matrix factorization method with super-efficient implementation in Gensim), which produces dense negative factors and thus harder-to-interpret topics.
Matrix factorizations are the corner stone of modern machine learning. They can be used either directly (recommendation systems, bi-clustering, image compression, topic modeling…) or as internal routines in more complex deep learning algorithms.
How ONNMF works
Terminology:
- corpus is a stream of input documents = training examples
- batch is a chunk of input corpus, a word-document matrix mini-batch that fits in RAM
- W is a word-topic matrix (to be learned; stored in the resulting model)
- h is a topic-document matrix (to be learned; not stored, but rather inferred for documents on-the-fly)
- A, B - matrices that accumulate information from consecutive chunks. A = h.dot(ht), B = v.dot(ht).
The idea behind the algorithm is as follows:
```
Initialize W, A and B matrices
for batch in input corpus batches:
infer h:
do coordinate gradient descent step to find h that minimizes ||batch - Wh|| in L2 norm
bound h so that it is non-negative
update A and B:
A = h.dot(ht)
B = batch.dot(ht)
update W:
do gradient descent step to find W that minimizes ||0.5*trace(WtWA) - trace(WtB)|| in L2 norm
```
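To make the loop above concrete, here is a minimal dense-NumPy sketch of a single batch update. It is illustrative only: the helper name onmf_batch_update, the multiplicative updates for h, and the step-size normalization are assumptions of this sketch, not Gensim's exact implementation, which operates on sparse matrices.
```
import numpy as np

def onmf_batch_update(W, A, B, batch, kappa=1.0, h_iters=50, w_iters=50):
    # Infer h for this batch with multiplicative updates, which keep h non-negative.
    h = np.abs(np.random.RandomState(0).rand(W.shape[1], batch.shape[1]))
    for _ in range(h_iters):
        h *= (W.T @ batch) / (W.T @ W @ h + 1e-9)
    # Accumulate the sufficient statistics A and B.
    A += h @ h.T
    B += batch @ h.T
    # Projected gradient steps on 0.5*trace(W'WA) - trace(W'B), clipped to stay non-negative.
    for _ in range(w_iters):
        W -= kappa * (W @ A - B) / (np.trace(A) + 1e-9)
        np.maximum(W, 0.0, out=W)
    return W, A, B, h

# Toy usage on random dense "documents" (words x docs chunks).
rng = np.random.RandomState(42)
W = np.abs(rng.rand(500, 5))
A, B = np.zeros((5, 5)), np.zeros((500, 5))
for _ in range(10):
    W, A, B, h = onmf_batch_update(W, A, B, np.abs(rng.rand(500, 100)))
```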
2. Code example: NMF on 20 Newsgroups
Preprocessing
Let's import the models we'll be using throughout this tutorial (numpy==1.14.2, matplotlib==3.0.2, pandas==0.24.1, sklearn==0.19.1, gensim==3.7.1) and set up logging at INFO level.
Gensim uses logging generously to inform users what's going on. Eyeballing the logs is a good sanity check, to make sure everything is working as expected.
Only numpy and gensim are actually needed to train and use NMF. The other imports are used only to make our life a little easier in this tutorial.
End of explanation
newsgroups = gensim.downloader.load('20-newsgroups')
categories = [
'alt.atheism',
'comp.graphics',
'rec.motorcycles',
'talk.politics.mideast',
'sci.space'
]
categories = {name: idx for idx, name in enumerate(categories)}
Explanation: Dataset preparation
Let's load the notorious 20 Newsgroups dataset from Gensim's repository of pre-trained models and corpora:
End of explanation
random_state = RandomState(42)
trainset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'train'
])
random_state.shuffle(trainset)
testset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'test'
])
random_state.shuffle(testset)
Explanation: Create a train/test split:
End of explanation
train_documents = [preprocess_string(doc['data']) for doc in trainset]
test_documents = [preprocess_string(doc['data']) for doc in testset]
Explanation: We'll use very simple preprocessing with stemming to tokenize each document. YMMV; in your application, use whatever preprocessing makes sense in your domain. Correctly preparing the input has a major impact on any subsequent ML training.
End of explanation
dictionary = Dictionary(train_documents)
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=20000) # filter out too in/frequent tokens
Explanation: Dictionary compilation
Let's create a mapping between tokens and their ids. Another option would be a HashDictionary, saving ourselves one pass over the training documents.
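A rough sketch of that alternative (illustrative only; the id_range value is arbitrary):
```
from gensim.corpora import HashDictionary

hash_dictionary = HashDictionary(id_range=30000)  # hashes tokens to ids, so no initial pass over the corpus is needed
hashed_bow = [hash_dictionary.doc2bow(doc) for doc in train_documents]
```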
End of explanation
tfidf = TfidfModel(dictionary=dictionary)
train_corpus = [
dictionary.doc2bow(document)
for document
in train_documents
]
test_corpus = [
dictionary.doc2bow(document)
for document
in test_documents
]
train_corpus_tfidf = list(tfidf[train_corpus])
test_corpus_tfidf = list(tfidf[test_corpus])
Explanation: Create training corpus
Let's vectorize the training corpus into the bag-of-words format. We'll train LDA on a BOW corpus and the NMF models on a TF-IDF corpus:
End of explanation
%%time
nmf = GensimNmf(
corpus=train_corpus_tfidf,
num_topics=5,
id2word=dictionary,
chunksize=1000,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
kappa=1,
)
Explanation: Here we simply stored the bag-of-words vectors into a list, but Gensim accepts any iterable as input, including streamed ones. To learn more about memory-efficient input iterables, see our Data Streaming in Python: Generators, Iterators, Iterables tutorial.
NMF Model Training
The API works in the same way as other Gensim models, such as LdaModel or LsiModel.
Notable model parameters:
kappa float, optional
Gradient descent step size.
Larger value makes the model train faster, but could lead to non-convergence if set too large.
w_max_iter int, optional
Maximum number of iterations to train W per each batch.
w_stop_condition float, optional
If the error difference gets smaller than this, training of W stops for the current batch.
h_r_max_iter int, optional
Maximum number of iterations to train h per each batch.
h_r_stop_condition float, optional
If the error difference gets smaller than this, training of h stops for the current batch.
Learn an NMF model with 5 topics:
End of explanation
nmf.show_topics()
Explanation: View the learned topics
End of explanation
CoherenceModel(
model=nmf,
corpus=test_corpus_tfidf,
coherence='u_mass'
).get_coherence()
Explanation: Evaluation measure: Coherence
Topic coherence measures how often the most frequent tokens from each topic co-occur in one document. Larger is better.
End of explanation
print(testset[0]['data'])
print('=' * 100)
print("Topics: {}".format(nmf[test_corpus[0]]))
Explanation: Topic inference on new documents
With the NMF model trained, let's fetch one news document not seen during training, and infer its topic vector.
End of explanation
word = dictionary[0]
print("Word: {}".format(word))
print("Topics: {}".format(nmf.get_term_topics(word)))
Explanation: Word topic inference
Similarly, we can inspect the topic distribution assigned to a vocabulary term:
End of explanation
def density(matrix):
return (matrix > 0).mean()
Explanation: Internal NMF state
Density is a fraction of non-zero elements in a matrix.
End of explanation
print("Density: {}".format(density(nmf._W)))
Explanation: Term-topic matrix of shape (words, topics).
End of explanation
print("Density: {}".format(density(nmf._h)))
Explanation: Topic-document matrix for the last batch of shape (topics, batch)
End of explanation
fixed_params = dict(
chunksize=1000,
num_topics=5,
id2word=dictionary,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
)
@contextmanager
def measure_ram(output, tick=5):
def _measure_ram(pid, output, tick=tick):
py = psutil.Process(pid)
with open(output, 'w') as outfile:
while True:
memory = py.memory_info().rss
outfile.write("{}\n".format(memory))
outfile.flush()
time.sleep(tick)
pid = os.getpid()
p = Process(target=_measure_ram, args=(pid, output, tick))
p.start()
yield
p.terminate()
def get_train_time_and_ram(func, name, tick=5):
memprof_filename = "{}.memprof".format(name)
start = time.time()
with measure_ram(memprof_filename, tick=tick):
result = func()
elapsed_time = pd.to_timedelta(time.time() - start, unit='s').round('s')
memprof_df = pd.read_csv(memprof_filename, squeeze=True)
mean_ram = "{} MB".format(
int(memprof_df.mean() // 2 ** 20),
)
max_ram = "{} MB".format(int(memprof_df.max() // 2 ** 20))
return elapsed_time, mean_ram, max_ram, result
def get_f1(model, train_corpus, X_test, y_train, y_test):
if isinstance(model, SklearnNmf):
dense_train_corpus = matutils.corpus2dense(
train_corpus,
num_terms=model.components_.shape[1],
)
X_train = model.transform(dense_train_corpus.T)
else:
X_train = np.zeros((len(train_corpus), model.num_topics))
for bow_id, bow in enumerate(train_corpus):
for topic_id, word_count in model.get_document_topics(bow):
X_train[bow_id, topic_id] = word_count
log_reg = LogisticRegressionCV(multi_class='multinomial', cv=5)
log_reg.fit(X_train, y_train)
pred_labels = log_reg.predict(X_test)
return f1_score(y_test, pred_labels, average='micro')
def get_sklearn_topics(model, top_n=5):
topic_probas = model.components_.T
topic_probas = topic_probas / topic_probas.sum(axis=0)
sparsity = np.zeros(topic_probas.shape[1])
for row in topic_probas:
sparsity += (row == 0)
sparsity /= topic_probas.shape[1]
topic_probas = topic_probas[:, sparsity.argsort()[::-1]][:, :top_n]
token_indices = topic_probas.argsort(axis=0)[:-11:-1, :]
topic_probas.sort(axis=0)
topic_probas = topic_probas[:-11:-1, :]
topics = []
for topic_idx in range(topic_probas.shape[1]):
tokens = [
model.id2word[token_idx]
for token_idx
in token_indices[:, topic_idx]
]
topic = (
'{}*"{}"'.format(round(proba, 3), token)
for proba, token
in zip(topic_probas[:, topic_idx], tokens)
)
topic = " + ".join(topic)
topics.append((topic_idx, topic))
return topics
def get_metrics(model, test_corpus, train_corpus=None, y_train=None, y_test=None, dictionary=None):
if isinstance(model, SklearnNmf):
model.get_topics = lambda: model.components_
model.show_topics = lambda top_n: get_sklearn_topics(model, top_n)
model.id2word = dictionary
W = model.get_topics().T
dense_test_corpus = matutils.corpus2dense(
test_corpus,
num_terms=W.shape[0],
)
if isinstance(model, SklearnNmf):
H = model.transform(dense_test_corpus.T).T
else:
H = np.zeros((model.num_topics, len(test_corpus)))
for bow_id, bow in enumerate(test_corpus):
for topic_id, word_count in model.get_document_topics(bow):
H[topic_id, bow_id] = word_count
l2_norm = None
if not isinstance(model, LdaModel):
pred_factors = W.dot(H)
l2_norm = np.linalg.norm(pred_factors - dense_test_corpus)
l2_norm = round(l2_norm, 4)
f1 = None
if train_corpus and y_train and y_test:
f1 = get_f1(model, train_corpus, H.T, y_train, y_test)
f1 = round(f1, 4)
model.normalize = True
coherence = CoherenceModel(
model=model,
corpus=test_corpus,
coherence='u_mass'
).get_coherence()
coherence = round(coherence, 4)
topics = model.show_topics(5)
model.normalize = False
return dict(
coherence=coherence,
l2_norm=l2_norm,
f1=f1,
topics=topics,
)
Explanation: 3. Benchmarks
Gensim NMF vs Sklearn NMF vs Gensim LDA
We'll run these three unsupervised models on the 20newsgroups dataset.
20 Newsgroups also contains labels for each document, which will allow us to evaluate the trained models on an "upstream" classification task, using the unsupervised document topics as input features.
Metrics
We'll track these metrics as we train and test NMF on the 20-newsgroups corpus we created above:
- train time - time to train a model
- mean_ram - mean RAM consumption during training
- max_ram - maximum RAM consumption during training
- coherence - coherence score (larger is better).
- l2_norm - L2 norm of v - Wh (less is better, not defined for LDA).
- f1 - F1 score on the task of news topic classification (larger is better).
End of explanation
tm_metrics = pd.DataFrame(columns=['model', 'train_time', 'coherence', 'l2_norm', 'f1', 'topics'])
y_train = [doc['target'] for doc in trainset]
y_test = [doc['target'] for doc in testset]
# LDA metrics
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(
corpus=train_corpus,
**fixed_params,
),
'lda',
1,
)
row.update(get_metrics(
lda, test_corpus, train_corpus, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Sklearn NMF metrics
row = {}
row['model'] = 'sklearn_nmf'
train_dense_corpus_tfidf = matutils.corpus2dense(train_corpus_tfidf, len(dictionary)).T
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: SklearnNmf(n_components=5, random_state=42).fit(train_dense_corpus_tfidf),
'sklearn_nmf',
1,
)
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test, dictionary,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Gensim NMF metrics
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], gensim_nmf = get_train_time_and_ram(
lambda: GensimNmf(
normalize=False,
corpus=train_corpus_tfidf,
**fixed_params
),
'gensim_nmf',
0.5,
)
row.update(get_metrics(
gensim_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
tm_metrics.replace(np.nan, '-', inplace=True)
Explanation: Run the models
End of explanation
tm_metrics.drop('topics', axis=1)
Explanation: Benchmark results
End of explanation
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
Explanation: Main insights
Gensim NMF is ridiculously fast and leaves both LDA and Sklearn far behind in terms of training time and quality on the downstream task (F1 score), though its coherence is the lowest among all models.
Gensim NMF beats Sklearn NMF in RAM consumption, but L2 norm is a bit worse.
Gensim NMF consumes a bit more RAM than LDA.
Learned topics
Let's inspect the 5 topics learned by each of the three models:
End of explanation
# Re-import modules from scratch, so that this Section doesn't rely on any previous cells.
import itertools
import json
import logging
import time
import os
from smart_open import smart_open
import psutil
import numpy as np
import scipy.sparse
from contextlib import contextmanager
from multiprocessing import Process
from tqdm import tqdm, tqdm_notebook
import joblib
import pandas as pd
from sklearn.decomposition.nmf import NMF as SklearnNmf
import gensim.downloader
from gensim import matutils
from gensim.corpora import MmCorpus, Dictionary
from gensim.models import LdaModel, LdaMulticore, CoherenceModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.utils import simple_preprocess
tqdm.pandas()
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Subjectively, Gensim and Sklearn NMFs are on par with each other, LDA looks a bit worse.
4. NMF on English Wikipedia
This section shows how to train an NMF model on a large text corpus, the entire English Wikipedia: 2.6 billion words, in 23.1 million article sections across 5 million Wikipedia articles.
The data preprocessing takes a while, and we'll be comparing multiple models, so reserve about FIXME hours and some 20 GB of disk space to go through the following notebook cells in full. You'll need gensim>=3.7.1, numpy, tqdm, pandas, psutils, joblib and sklearn.
End of explanation
data = gensim.downloader.load("wiki-english-20171001")
Explanation: Load the Wikipedia dump
We'll use the gensim.downloader to download a parsed Wikipedia dump (6.1 GB disk space):
End of explanation
data = gensim.downloader.load("wiki-english-20171001")
article = next(iter(data))
print("Article: %r\n" % article['title'])
for section_title, section_text in zip(article['section_titles'], article['section_texts']):
print("Section title: %r" % section_title)
print("Section text: %s…\n" % section_text[:100].replace('\n', ' ').strip())
Explanation: Print the titles and sections of the first Wikipedia article, as a little sanity check:
End of explanation
def wikidump2tokens(articles):
    """Stream through the Wikipedia dump, yielding a list of tokens for each article."""
for article in articles:
article_section_texts = [
" ".join([title, text])
for title, text
in zip(article['section_titles'], article['section_texts'])
]
article_tokens = simple_preprocess(" ".join(article_section_texts))
yield article_tokens
Explanation: Let's create a Python generator function that streams through the downloaded Wikipedia dump and preprocesses (tokenizes, lower-cases) each article:
End of explanation
if os.path.exists('wiki.dict'):
# If we already stored the Dictionary in a previous run, simply load it, to save time.
dictionary = Dictionary.load('wiki.dict')
else:
dictionary = Dictionary(wikidump2tokens(data))
# Keep only the 30,000 most frequent vocabulary terms, after filtering away terms
# that are too frequent/too infrequent.
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=30000)
dictionary.save('wiki.dict')
Explanation: Create a word-to-id mapping, in order to vectorize texts. Makes a full pass over the Wikipedia corpus, takes ~3.5 hours:
End of explanation
vector_stream = (dictionary.doc2bow(article) for article in wikidump2tokens(data))
Explanation: Store preprocessed Wikipedia as bag-of-words sparse matrix in MatrixMarket format
When training NMF with a single pass over the input corpus ("online"), we simply vectorize each raw text straight from the input storage:
End of explanation
class RandomSplitCorpus(MmCorpus):
    """Use the fact that MmCorpus supports random indexing, and create a streamed
    corpus in shuffled order, including a train/test split for evaluation.
    """
def __init__(self, random_seed=42, testset=False, testsize=1000, *args, **kwargs):
super().__init__(*args, **kwargs)
random_state = np.random.RandomState(random_seed)
self.indices = random_state.permutation(range(self.num_docs))
test_nnz = sum(len(self[doc_idx]) for doc_idx in self.indices[:testsize])
if testset:
self.indices = self.indices[:testsize]
self.num_docs = testsize
self.num_nnz = test_nnz
else:
self.indices = self.indices[testsize:]
self.num_docs -= testsize
self.num_nnz -= test_nnz
def __iter__(self):
for doc_id in self.indices:
yield self[doc_id]
if not os.path.exists('wiki.mm'):
MmCorpus.serialize('wiki.mm', vector_stream, progress_cnt=100000)
if not os.path.exists('wiki_tfidf.mm'):
MmCorpus.serialize('wiki_tfidf.mm', tfidf[MmCorpus('wiki.mm')], progress_cnt=100000)
# Load back the vectors as two lazily-streamed train/test iterables.
train_corpus = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki.mm',
)
test_corpus = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki.mm',
)
train_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki_tfidf.mm',
)
test_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki_tfidf.mm',
)
Explanation: For the purposes of this tutorial though, we'll serialize ("cache") the vectorized bag-of-words vectors to disk, to wiki.mm file in MatrixMarket format. The reason is, we'll be re-using the vectorized articles multiple times, for different models for our benchmarks, and also shuffling them, so it makes sense to amortize the vectorization time by persisting the resulting vectors to disk.
So, let's stream through the preprocessed sparse Wikipedia bag-of-words matrix while storing it to disk. This step takes about 3 hours and needs 38 GB of disk space:
End of explanation
if not os.path.exists('wiki_train_csr.npz'):
scipy.sparse.save_npz(
'wiki_train_csr.npz',
matutils.corpus2csc(train_corpus_tfidf, len(dictionary)).T,
)
Explanation: Save preprocessed Wikipedia in scipy.sparse format
This is only needed to run the Sklearn NMF on Wikipedia, for comparison in the benchmarks below. Sklearn expects in-memory scipy sparse input, not on-the-fly vector streams. Needs additional ~2 GB of disk space.
Skip this step if you don't need the Sklearn's NMF benchmark, and only want to run Gensim's NMF.
End of explanation
tm_metrics = pd.DataFrame(columns=[
'model', 'train_time', 'mean_ram', 'max_ram', 'coherence', 'l2_norm', 'topics',
])
Explanation: Metrics
We'll track these metrics as we train and test NMF on the Wikipedia corpus we created above:
- train time - time to train a model
- mean_ram - mean RAM consumption during training
- max_ram - maximum RAM consumption during training
- coherence - coherence score (larger is better).
- l2_norm - L2 norm of v - Wh (less is better, not defined for LDA).
Define a dataframe in which we'll store the recorded metrics:
End of explanation
params = dict(
chunksize=2000,
num_topics=50,
id2word=dictionary,
passes=1,
eval_every=10,
minimum_probability=0,
random_state=42,
)
Explanation: Define common parameters, to be shared by all evaluated models:
End of explanation
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], nmf = get_train_time_and_ram(
lambda: GensimNmf(normalize=False, corpus=train_corpus_tfidf, **params),
'gensim_nmf',
1,
)
print(row)
nmf.save('gensim_nmf.model')
nmf = GensimNmf.load('gensim_nmf.model')
row.update(get_metrics(nmf, test_corpus_tfidf))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
Explanation: Wikipedia training
Train Gensim NMF model and record its metrics
End of explanation
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(corpus=train_corpus, **params),
'lda',
1,
)
print(row)
lda.save('lda.model')
lda = LdaModel.load('lda.model')
row.update(get_metrics(lda, test_corpus))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
Explanation: Train Gensim LDA and record its metrics
End of explanation
row = {}
row['model'] = 'sklearn_nmf'
sklearn_nmf = SklearnNmf(n_components=50, tol=1e-2, random_state=42)
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: sklearn_nmf.fit(scipy.sparse.load_npz('wiki_train_csr.npz')),
'sklearn_nmf',
10,
)
print(row)
joblib.dump(sklearn_nmf, 'sklearn_nmf.joblib')
sklearn_nmf = joblib.load('sklearn_nmf.joblib')
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, dictionary=dictionary,
))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
Explanation: Train Sklearn NMF and record its metrics
Careful! Sklearn loads the entire input Wikipedia matrix into RAM. Even though the matrix is sparse, you'll need FIXME GB of free RAM to run the cell below.
End of explanation
tm_metrics.replace(np.nan, '-', inplace=True)
tm_metrics.drop(['topics', 'f1'], axis=1)
Explanation: Wikipedia results
End of explanation
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
Explanation: Insights
Gensim's online NMF outperforms Sklearn's NMF in terms of speed and RAM consumption:
2x faster.
Uses ~20x less memory.
About 8GB of Sklearn's RAM comes from the in-memory input matrices, which, in contrast to Gensim NMF, cannot be streamed iteratively. But even if we forget about the huge input size, Sklearn NMF uses about 2-8 GB of RAM – significantly more than Gensim NMF or LDA.
L2 norm and coherence are a bit worse.
Compared to Gensim's LDA, Gensim NMF also gives superior results:
3x faster
Coherence is worse than LDA's though.
Learned Wikipedia topics
End of explanation
import logging
import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition.nmf import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
from sklearn.model_selection import ParameterGrid
import gensim.downloader
from gensim import matutils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, LdaMulticore
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from sklearn.base import BaseEstimator, TransformerMixin
import scipy.sparse as sparse
class NmfWrapper(BaseEstimator, TransformerMixin):
def __init__(self, bow_matrix, **kwargs):
self.corpus = sparse.csc.csc_matrix(bow_matrix)
self.nmf = GensimNmf(**kwargs)
def fit(self, X):
self.nmf.update(self.corpus)
@property
def components_(self):
return self.nmf.get_topics()
Explanation: It seems all three models successfully learned useful topics from the Wikipedia corpus.
5. And now for something completely different: Face decomposition from images
The NMF algorithm in Gensim is optimized for extremely large (sparse) text corpora, but it will also work on vectors from other domains!
Let's compare our model to other factorization algorithms on dense image vectors and check out the results.
To do that we'll patch sklearn's Faces Dataset Decomposition.
Sklearn wrapper
Let's create a Scikit-learn wrapper in order to run Gensim NMF on images.
End of explanation
gensim.models.nmf.logger.propagate = False
"""
============================
Faces dataset decompositions
============================
This example applies to :ref:`olivetti_faces` different unsupervised
matrix decomposition (dimension reduction) methods from the module
:py:mod:`sklearn.decomposition` (see the documentation chapter
:ref:`decompositions`).
"""
print(__doc__)
# Authors: Vlad Niculae, Alexandre Gramfort
# License: BSD 3 clause
n_row, n_col = 2, 3
n_components = n_row * n_col
image_shape = (64, 64)
rng = RandomState(0)
# #############################################################################
# Load faces data
dataset = fetch_olivetti_faces(shuffle=True, random_state=rng)
faces = dataset.data
n_samples, n_features = faces.shape
# global centering
faces_centered = faces - faces.mean(axis=0)
# local centering
faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
def plot_gallery(title, images, n_col=n_col, n_row=n_row):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
plt.suptitle(title, size=16)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
vmax = max(comp.max(), -comp.min())
plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray,
interpolation='nearest',
vmin=-vmax, vmax=vmax)
plt.xticks(())
plt.yticks(())
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
# #############################################################################
# List of the different estimators, whether to center and transpose the
# problem, and whether the transformer uses the clustering API.
estimators = [
('Eigenfaces - PCA using randomized SVD',
decomposition.PCA(n_components=n_components, svd_solver='randomized',
whiten=True),
True),
('Non-negative components - NMF (Sklearn)',
decomposition.NMF(n_components=n_components, init='nndsvda', tol=5e-3),
False),
('Non-negative components - NMF (Gensim)',
NmfWrapper(
bow_matrix=faces.T,
chunksize=3,
eval_every=400,
passes=2,
id2word={idx: idx for idx in range(faces.shape[1])},
num_topics=n_components,
minimum_probability=0,
random_state=42,
),
False),
('Independent components - FastICA',
decomposition.FastICA(n_components=n_components, whiten=True),
True),
('Sparse comp. - MiniBatchSparsePCA',
decomposition.MiniBatchSparsePCA(n_components=n_components, alpha=0.8,
n_iter=100, batch_size=3,
random_state=rng),
True),
('MiniBatchDictionaryLearning',
decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
n_iter=50, batch_size=3,
random_state=rng),
True),
('Cluster centers - MiniBatchKMeans',
MiniBatchKMeans(n_clusters=n_components, tol=1e-3, batch_size=20,
max_iter=50, random_state=rng),
True),
('Factor Analysis components - FA',
decomposition.FactorAnalysis(n_components=n_components, max_iter=2),
True),
]
# #############################################################################
# Plot a sample of the input data
plot_gallery("First centered Olivetti faces", faces_centered[:n_components])
# #############################################################################
# Do the estimation and plot it
for name, estimator, center in estimators:
print("Extracting the top %d %s..." % (n_components, name))
t0 = time.time()
data = faces
if center:
data = faces_centered
estimator.fit(data)
train_time = (time.time() - t0)
print("done in %0.3fs" % train_time)
if hasattr(estimator, 'cluster_centers_'):
components_ = estimator.cluster_centers_
else:
components_ = estimator.components_
# Plot an image representing the pixelwise variance provided by the
# estimator e.g its noise_variance_ attribute. The Eigenfaces estimator,
# via the PCA decomposition, also provides a scalar noise_variance_
# (the mean of pixelwise variance) that cannot be displayed as an image
# so we skip it.
if (hasattr(estimator, 'noise_variance_') and
estimator.noise_variance_.ndim > 0): # Skip the Eigenfaces case
plot_gallery("Pixelwise variance",
estimator.noise_variance_.reshape(1, -1), n_col=1,
n_row=1)
plot_gallery('%s - Train time %.1fs' % (name, train_time),
components_[:n_components])
plt.show()
Explanation: Modified face decomposition notebook
Adapted from the excellent Scikit-learn tutorial (BSD license):
Turn off the logger due to large number of info messages during training
End of explanation |
8,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cu-Mg workflows
Goal
Step1: Get your structures
Three main ways you'll use to get structures
1. From the Materials Project via the MPRester API
Step2: 2. From a POSCAR file
Step3: 3. From an SQS in prlworkflows
Until the SQS database is complete and queryable, these will have to be created 'by hand' from ATAT lattice.in type files.
To get all of these,
Download ATAT from https
Step4: Run the robust optimization
The robust optimization will do a constrained optimization of your structures.
There are typically three kinds of degrees of freedom we can control
Step5: Mixing robust optimization
Step6: Finding the minimum energy structure
Now that we've completed the robust calculations, we want to find the minimum energy structure for each robust optimization.
The general steps are
Step7: Get all the structures
Step8: Get the most optimized constrained structure
This is essentially a collection of heuristics on matrix norms and symmetry.
They are not optimized yet, don't trust the tolerances!
Step9: Gibbs energies and unconstrained formation energies
With the minimum energy constrained structures, we want to find
The finite temperature data (Gibbs free energies) for the constrained structure
The unconstrained formation energy
The reason for the finite temperature data is clear, but the unconstrained formation energy may seem less obvious.
For unstable structures we often cannot reasonably reach the true minimum energy and structure (the so called lattice stability, see van de Walle et al. Nat. Commun. 6, 7559 (2015)). This minimum energy lies somewhere between the minimum constrained energy and the unconstrained energy. It's not uncommon for the constrained and unconstrained energies to differ by 10 kJ/mol-atom. Knowing unconstrained energy allows us to have some insight into how precise our constrained energy is compared to the true lattice stability.
Gibbs free energy | Python Code:
from fireworks import LaunchPad
# lpad = LaunchPad.auto_load()
lpad = LaunchPad.from_file('/Users/brandon/.fireworks/my_launchpad.yaml')
Explanation: Cu-Mg workflows
Goal: fully describe the Cu-Mg system with DFT calculations
Phases
There are 5 phases in Cu-Mg that will be described with the following models
Phase | Model
----- | : ----
$\textrm{liquid}$ | $\textrm{(Cu, Mg)}$
$\textrm{fcc A1}$ | $\textrm{(Cu, Mg)}$
$\textrm{hcp A3}$ | $\textrm{(Cu, Mg)}$
$\textrm{Cu}\textrm{Mg}_2$ | $\textrm{(Cu)}\textrm{(Mg)}_2$
$\textrm{Laves C15}$ | $\textrm{(Cu, Mg)}_2 \textrm{(Cu, Mg)}$
Four of these phases are solid phases that can be investigated with DFT.
The others are modeled as mixing phases, and thus the mixing properties must be calculated.
Approach
All compounds, solution endmembers and mixing structures will have their thermodynamic properties calculated via the Debye model.
Currently, the contributions to the Debye model do not include the electronic contribution to the heat capacity.
An electronic DOS can be calculated later and this contribution added.
General imports and setup
Configuration
Before you start anything, you'll have to install prlworkflows.
Right now there is no release version on PyPI or Anaconda, so you need to git clone and install it that way.
Since this code simply adds structures to your LaunchPad, you can run this portion of prlworkflows and atomate on any computer, so long as you set up the databases as in the atomate installation instructions.
Either way, I suggest using pip to install and doing the following in your terminal:
cd ~/work/atomate/codes
git clone https://github.com/phasesresearchlab/prlworkflows
cd prlworkflows
pip install -e .
Finally, you need to do a little configuration on the system(s) you will be adding and running workflows from:
Add - prlworkflows to the list of ADD_USER_PACKAGES in $MY_ATOMATE_INSTALL/config/FW_CONFIG.yaml
Change the force convergence criteria in the drone analysis. If you fail to do this, relaxations that have some residual force will defuse the next Fireworks. If you don't have atomate installed as editable from git, then you need to
a. Go to your virtual environment directory cd ~/work/atomate/atomate_env
b. vi lib/python2.7/site-packages/atomate/vasp/drones.py
c. Find the VaspDrone.set_analysis method and change the default arguments from
python
def set_analysis(d, max_force_threshold=0.5, volume_change_threshold=0.2):
to
python
def set_analysis(d, max_force_threshold=10, volume_change_threshold=0.2):
Now we can get to running the code.
First, set up the LaunchPad using your own credentials:
End of explanation
from pymatgen import MPRester
with MPRester() as mpr: # provide an API key if you don't have one set up in your .pmgrc.yaml
structure = mpr.get_structure_by_material_id('mp-134')
Explanation: Get your structures
Three main ways you'll use to get structures
1. From the Materials Project via the MPRester API
End of explanation
from pymatgen import Structure
structure = Structure.from_file('POSCAR')
Explanation: 2. From a POSCAR file
End of explanation
from prlworkflows.sqs_db import lat_in_to_sqs
my_struct_best_sqs_path = '/Users/brandon/Downloads/atat/data/sqsdb/MGCU2_C15/sqsdb_lev=0_b=1_c=1/bestsqs.out'
with open(my_struct_best_sqs_path) as fp:
lat_in_txt = fp.read()
abstract_sqs = lat_in_to_sqs(lat_in_txt)
structure = abstract_sqs.get_concrete_sqs([['Cu'], ['Cu']])
Explanation: 3. From an SQS in prlworkflows
Until the SQS database is complete and queryable, these will have to be created 'by hand' from ATAT lattice.in type files.
To get all of these,
Download ATAT from https://www.brown.edu/Departments/Engineering/Labs/avdw/atat/
Unzip the archive
Go to the folder atat/data/sqsdb, there will be a list of structures and levels
Pick the phase/level/structure that you want, then get the file path to the bestsqs.out file.
Process the file with lat_in_to_sqs creating an AbstractSQS
Make the AbstractSQS concrete with a sublattice model
End of explanation
from prlworkflows.prl_workflows import get_wf_robust_optimization
metadata = {
'phase': 'LAVES_C15',
'sublattice_model': {
# if you are using an SQS object, you can use these.
# otherwise you should set them by hand so you can query on them later
'configuration': structure.sublattice_configuration, # [['Cu'], ['Cu']]
'occupancies': structure.sublattice_occupancies, # [[1.0], [1.0]]
'site_ratios': structure.sublattice_site_ratios # [1, 2]
},
'output': 'HM_FORM' # To help filter things later. This is formation data, rather than mixing, because it's an endmembmer
}
wf = get_wf_robust_optimization(structure, metadata=metadata, db_file='>>db_file<<', vasp_cmd='>>vasp_cmd<<')
lpad.add_wf(wf)
Explanation: Run the robust optimization
The robust optimization will do a constrained optimization of your structures.
There are typically three kinds of degrees of freedom we can control:
Unit cell volume
Unit cell shape (lattice vectors; length and angles of unit cell)
Ion positions
If we fix any one of these degrees of freedom, we are doing a constrained relaxation, otherwise if all degrees of freedom are considered in a relaxation, it is called an unconstrained relaxation.
Most stable structures that are already close to their minimum energy configuration are typically easy to perform unconstrained relaxations on. All of the structures in the Materials Project are this way.
We would like to find the minimum energy configuration of our endmembers, SQS, and dilute mixing structures. However, many SQS, some endmembers, and some dilute mixing structures are unstable, and the minimum energy structures found in an unconstrained optimization are not representative of the real structure and energy.
Thus we have developed a series of constrained optimizations to find the lowest energy structure while maintaining the integrity of the structure. It is called the robust optimization workflow, which performs the following steps:
Volume relaxations until the volume has converged
An ionic position relaxation
A shape and ionic position relaxation
The minimum energy structure we will choose is the one that has gone the furthest in the series, but still maintained the symmetry within a tolerance. The volume relaxation always will, but the ionic positions and cell shape + ionic positions relaxations may result in broken structures.
For now these are all run one after another and we have to find the structure that progressed the furthest 'by hand' in code. In the future we will modify the workflow to check that the structure has not broken after step 2 and 3 to find the constrained structure of minimal energy and preserved symmetry.
Note: for now we only run VASP one time in step 2 and 3. It is possible that we would like to run a series of these relaxations as in step 1, to make sure that each relaxation type is fully converged w.r.t. itself, but there is currently no machinery for that. It would have to be implemented either in custodian like the volume relaxation, or it would need to be implemented by adding a detour Firework to do another relaxation if some criteria isn't met.
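For orientation, the three stages above map onto VASP ISIF settings; the mapping below is an assumption based on standard VASP semantics and on how the database queries later in this notebook filter tasks:
```
# Assumed correspondence between the workflow stages and the VASP ISIF tag
stage_isif = {
    'volume only (step 1)': 7,    # relax cell volume; cell shape and ion positions fixed
    'ion positions (step 2)': 2,  # relax ions; cell fixed
    'shape + ions (step 3)': 4,   # relax cell shape and ions at fixed volume
}
```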
Formation robust optimization
End of explanation
my_struct_best_sqs_path = '/Users/brandon/Downloads/atat/data/sqsdb/HCP_A3/sqsdb_lev=1_c=0.5,0.5/bestsqs.out'
with open(my_struct_best_sqs_path) as fp:
lat_in_txt = fp.read()
abstract_sqs = lat_in_to_sqs(lat_in_txt)
structure_mixing = abstract_sqs.get_concrete_sqs([['Cu', 'Mg']])
from prlworkflows.prl_workflows import get_wf_robust_optimization
metadata = {
'phase': 'HCP_A3',
'sublattice_model': {
# if you are using an SQS object, you can use these.
# otherwise you should set them by hand so you can query on them later
'configuration': structure_mixing.sublattice_configuration, # [['Cu', 'Mg']]
'occupancies': structure_mixing.sublattice_occupancies, # [[0.5, 0.5]]
'site_ratios': structure_mixing.sublattice_site_ratios # [2]
},
'output': 'HM_MIX' # To help filter things later
}
wf = get_wf_robust_optimization(structure_mixing, metadata=metadata, db_file='>>db_file<<', vasp_cmd='>>vasp_cmd<<')
lpad.add_wf(wf)
Explanation: Mixing robust optimization
End of explanation
from atomate.vasp.database import VaspCalcDb
# create the atomate db from your db.json
PATH_TO_MY_DB_JSON = '/Users/brandon/.fireworks/db.json'
atomate_db = VaspCalcDb.from_db_file(PATH_TO_MY_DB_JSON)
# the unique query to find a structure
structure_query = {'metadata.phase': 'HCP_A3',
'metadata.sublattice_model.configuration': [['Cu']]}
t = atomate_db.db.tasks.find(structure_query)
print('Calculations found : {}'.format(t.count())) # should have found 3 if this calculation is complete
Explanation: Finding the minimum energy structure
Now that we've completed the robust calculations, we want to find the minimum energy structure for each robust optimization.
The general steps are:
Load the database where calculation results are stored (db.json)
Query for the structure
Check if the ion positions changed significantly from relaxation step 2. If it did not change continue, otherwise the minimum energy structure is the volume relaxed structure from relaxation step 1.
Check if the cell shape and ion positions changed significantly in relaxation step 3. If they did not change, the final structure is our minimum energy structure. If either changed, then our minimum energy structure is the cell shape relaxed (relaxation step 2) structure.
End of explanation
# the oneline dict expressions require Python3
vol_task = atomate_db.db.tasks.find_one({**structure_query, 'orig_inputs.incar.ISIF': 7})
vol_relax_structure = Structure.from_dict(vol_task['output']['structure'])
ions_task = atomate_db.db.tasks.find_one({**structure_query, 'orig_inputs.incar.ISIF': 2})
ions_structure = Structure.from_dict(ions_task['output']['structure'])
shape_ions_task = atomate_db.db.tasks.find_one({**structure_query, 'orig_inputs.incar.ISIF': 4})
shape_ions_structure = Structure.from_dict(shape_ions_task['output']['structure'])
relaxed_structs = [vol_relax_structure, ions_structure, shape_ions_structure]
Explanation: Get all the structures
End of explanation
import numpy as np
def check_ionic_positions(s1, s2, tolerance=0.1):
# this will totally fail if the order of the ions in the structure changed
norm = np.linalg.norm(s1.distance_matrix - s2.distance_matrix)
return norm < tolerance
def check_cell_shape(s1, s2, tolerance=0.1):
m1 = s1.lattice.matrix
m2 = s2.lattice.matrix
norm = np.linalg.norm(m1 - m2)
return norm < tolerance
def check_pure_symmetry(s1, s2, tolerance=0.1):
symm1 = make_pure_struct(s1).get_space_group_info(symprec=tolerance)
symm2 = make_pure_struct(s2).get_space_group_info(symprec=tolerance)
return symm1[1] == symm2[1]
def make_pure_struct(struct):
el0 = struct.types_of_specie[0].name
replacement_dict = {el.name: el0 for el in struct.types_of_specie[1:]}
struct_copy = struct.copy()
struct_copy.replace_species(replacement_dict)
return struct_copy
def find_stable_from_robust(vol_struct, ions_struct, shape_ions_struct):
if not check_ionic_positions(vol_struct, ions_struct):
print('Volume (step 1) is most stable')
return vol_struct
ions = check_ionic_positions(ions_structure, shape_ions_struct)
shape = check_cell_shape(ions_structure, shape_ions_struct)
symmetry = check_pure_symmetry(ions_structure, shape_ions_struct)
if not (ions and shape and symmetry):
print('Ionic position (step 2) is most stable')
return ions_struct
else:
print('Shape and ions (step 3) is most stable')
return shape_ions_struct
stable_struct = find_stable_from_robust(*relaxed_structs)
Explanation: Get the most optimized constrained structure
This is essentially a collection of heuristics on matrix norms and symmetry.
They are not optimized yet, don't trust the tolerances!
End of explanation
from prlworkflows.prl_workflows import wf_gibbs_free_energy
config_dict = {
'OPTIMIZE': False,
'T_MIN': 5,
'T_MAX': 2000,
'T_STEP': 5,
'POISSON': 0.32
}
wf = wf_gibbs_free_energy(stable_struct, config_dict)
Explanation: Gibbs energies and unconstrained formation energies
With the minimum energy constrained structures, we want to find
The finite temperature data (Gibbs free energies) for the constrained structure
The unconstrained formation energy
The reason for the finite temperature data is clear, but the unconstrained formation energy may seem less obvious.
For unstable structures we often cannot reasonably reach the true minimum energy and structure (the so called lattice stability, see van de Walle et al. Nat. Commun. 6, 7559 (2015)). This minimum energy lies somewhere between the minimum constrained energy and the unconstrained energy. It's not uncommon for the constrained and unconstrained energies to differ by 10 kJ/mol-atom. Knowing unconstrained energy allows us to have some insight into how precise our constrained energy is compared to the true lattice stability.
Gibbs free energy
End of explanation |
8,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary of Data (Part 2)
This post suggests the revised functionalities offered by the summary function described previously in the Summary of Data. The functions are mainly available in the preprocess.py file. We use a simple set of data 'beijing_201802_201803_aq.csv' from KDD CUP of Fresh Air, which provides air qualities measured by the weather stations in Beijing during February, March 2018.
The following mainly presents two functionalities
Step1: The following summary considers the top 1% and bottom 1% of given numeric data as outliers. summary function offers us a simple idea on the type of data and the numeric data distribution.
Step2: User can also set outlier_as_nan=False parameter to display the outliers of data. This setting basically set the non-outliers as NaN (not a number). This offers us the convenience to understand the bigger picture of our available data before we proceed further to uncover additional insights! | Python Code:
# Import functions and load data into a dataframe
import sys
sys.path.append("../")
import pandas as pd
from script.preprocess import summary, warn_missing
kwargs = {"parse_dates": ["utc_time"]}
bj_aq_df = pd.read_csv("beijing_201802_201803_aq.csv", **kwargs)
warn_missing(bj_aq_df, "beijing_201802_201803_aq.csv")
Explanation: Summary of Data (Part 2)
This post suggests the revised functionalities offered by the summary function described previously in the Summary of Data. The functions are mainly available in the preprocess.py file. We use a simple set of data 'beijing_201802_201803_aq.csv' from KDD CUP of Fresh Air, which provides air qualities measured by the weather stations in Beijing during February, March 2018.
The following mainly presents two functionalities:
1. warn_missing - Describe the proportion of missing values
2. summary - Provide the descriptive summary of data distribution. The visualization of data distribution can exclude outliers (filter_outlier=True) specified by a set of quantiles.
To test the execution of functions, please clone the repository jqlearning and work on the jupyter notebook: Summary-of-data2.ipynb.
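For intuition only, a minimal re-implementation of the missing-value report might look like the sketch below (hypothetical; the real warn_missing in preprocess.py may differ in signature and output):
```
def warn_missing_sketch(df, name):
    # Print the fraction of missing values for every column that has any.
    missing = df.isnull().mean()
    for column, fraction in missing[missing > 0].items():
        print("{}: column '{}' is {:.1%} missing".format(name, column, fraction))
```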
End of explanation
summary(bj_aq_df, quantile=(0.01, 0.99), outlier_as_nan=True, filter_outlier=True)
Explanation: The following summary considers the top 1% and bottom 1% of given numeric data as outliers. summary function offers us a simple idea on the type of data and the numeric data distribution.
End of explanation
summary(bj_aq_df, quantile=(0.01, 0.99), outlier_as_nan=False, filter_outlier=True)
Explanation: Users can also set the outlier_as_nan=False parameter to display the outliers of the data. This setting basically sets the non-outliers as NaN (not a number). This offers us the convenience to understand the bigger picture of our available data before we proceed further to uncover additional insights!
End of explanation |
8,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cerebral Cortex Data Analysis Algorithms
Cerebral Cortex contains a library of algorithms that are useful for processing data and converting it into features or biomarkers. This page demonstrates a simple GPS clustering algorithm. For more details about the algorithms that are available, please see our documentation. These algorithms are constantly being developed and improved through our own work and the work of other researchers.
Initialize the system
Step1: Generate some sample location data
This example utilizes a data generator to protect the privacy of real participants and allows for anyone utilizing this system to explore the data without required institutional review board approvals. This is disabled for this demonstration to not create too much data at once.
Step2: Cluster the location data
Cerebral Cortex makes it easy to apply built-in algorithms to data streams. In this case, gps_clusters is imported from the algorithm library, then compute is utilized to run this algorithm on the gps_stream to generate a set of centroids. This is the general format for applying algorithm to datastream and makes it easy for researchers to apply validated and tested algorithms to his/her own data without the need to become an expert in the particular set of transformations needed.
Note
Step3: Visualize GPS Data
GPS Stream Plot
GPS visualization requires dedicated plotting capabilities. Cerebral Cortex includes a library to allow for interactive exploration. In this plot, use your mouse to drag the map around along with zooming in to explore the specific data points. | Python Code:
%reload_ext autoreload
from util.dependencies import *
CC = Kernel("/home/jovyan/cc_conf/", study_name="default")
Explanation: Cerebral Cortex Data Analysis Algorithms
Cerebral Cortex contains a library of algorithms that are useful for processing data and converting it into features or biomarkers. This page demonstrates a simple GPS clustering algorithm. For more details about the algorithms that are available, please see our documentation. These algorithms are constantly being developed and improved through our own work and the work of other researchers.
Initialize the system
End of explanation
gps_stream = gen_location_datastream(user_id=USER_ID, stream_name="GPS--org.md2k.phonesensor--PHONE")
gps_stream.printSchema()
gps_stream.show(3)
Explanation: Generate some sample location data
This example utilizes a data generator to protect the privacy of real participants and allows for anyone utilizing this system to explore the data without required institutional review board approvals. This is disabled for this demonstration to not create too much data at once.
End of explanation
from cerebralcortex.algorithms.gps.clustering import cluster_gps
windowed_gps = gps_stream.window()
clusters = cluster_gps(windowed_gps)
clusters.printSchema()
clusters.show(truncate=False)
Explanation: Cluster the location data
Cerebral Cortex makes it easy to apply built-in algorithms to data streams. In this case, gps_clusters is imported from the algorithm library, then compute is utilized to run this algorithm on the gps_stream to generate a set of centroids. This is the general format for applying algorithms to datastreams and makes it easy for researchers to apply validated and tested algorithms to their own data without the need to become an expert in the particular set of transformations needed.
Note: the compute method engages the parallel computation capabilities of Cerebral Cortex, which causes all the data to be read from the data storage layer and processed on every computational core available to the system. This allows the computation to run as quickly as possible and to take advantage of powerful clusters from a relatively simple interface. This capability is critical to working with mobile sensor big data where data sizes can exceed 100s of gigabytes per datastream for larger studies.
End of explanation
from cerebralcortex.plotting.gps.plots import plot_gps_clusters
plot_gps_clusters(clusters, user_id=USER_ID, zoom=8)
Explanation: Visualize GPS Data
GPS Stream Plot
GPS visualization requires dedicated plotting capabilities. Cerebral Cortex includes a library to allow for interactive exploration. In this plot, use your mouse to drag the map around along with zooming in to explore the specific data points.
End of explanation |
8,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling Competitive Binding
We will model binding of two ligands, one is fluorescent (L), the other competing ligand (A) is not. Kd of both of their binding to protein (P) are known.
Complex concentration and fluorescence of the complex are assumed to be directly related.
Step1: Strictly Competitive Binding Model
$$L + P \underset{k_{dL}}{\stackrel{k_L}{\rightleftharpoons}} PL$$
$$A + P \underset{k_{dA}}{\stackrel{k_A}{\rightleftharpoons}} PA$$
Modelling the competition experiment - Expected Binding Curve
P
Step2: Ideally, protein concentration should be 10-fold lower than Kd. It must be at least half of Kd if fluorescence detection requires higher protein concentration.
For our assay it will be 0.5 uM.
Step3: Ideally, ligand concentration should span 100-fold Kd to 0.01-fold Kd, log dilution.
The ligand concentration will be in half log dilution from 20 uM ligand.
Step5: For the assumption that free $[A] = A_{tot}$, it is required that $[A] >> [P]$.
Ligand depletion can cause additional shift of $ K_{dL,app}$
If the competitive ligand is not present ($[A] = A_{tot} = 0$), we can calculate [PL] as a function of Ptot, Ltot, and Kd as follows
Step6: Without competitive ligand
Step7: In the presence of 10 uM competitive ligand
Step8: In the presence of 50 uM competitive ligand
Step9: Predicting experimental fluorescence signal of saturation binding experiment
Molar fluorescence values based on dansyl amide.
Step10: Fluorescent ligand (L) titration into buffer
Step11: Fluorescent ligand titration into HSA (without competitive ligand)
Step12: Checking ligand depletion
Step13: Fluorescent ligand titration into HSA (with 10 uM competitive ligand)
Step14: Fluorescent ligand titration into HSA (with 50 uM competitive ligand)
Step15: Checking ligand depletion | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from IPython.display import display, Math, Latex #Do we even need this anymore?
%pylab inline
Explanation: Modeling Competitive Binding
We will model the binding of two ligands: one is fluorescent (L), the other competing ligand (A) is not. The Kd values of both for binding to the protein (P) are known.
Complex concentration and fluorescence of the complex are assumed to be directly related.
End of explanation
# Dissociation constant for fluorescent ligand L: K_dL (uM)
Kd_L = 15
Explanation: Strictly Competitive Binding Model
$$L + P \underset{k_dL}{\stackrel{k_L}{\rightleftharpoons}} PL$$
$$A + P \underset{k_dA}{\stackrel{k_A}{\rightleftharpoons}} PA$$
Modelling the competition experiment - Expected Binding Curve
P: protein (HSA)
L: tracer ligand (dansyl amide),
A: non-fluorescent ligand (phenylbutazone)
End of explanation
# Total protein (uM)
Ptot = 0.5
Explanation: Ideally, protein concentration should be 10-fold lower than Kd. It must be at least half of Kd if fluorescence detection requires higher protein concentration.
For our assay it will be 0.5 uM.
End of explanation
num_wells = 12.0
# (uM)
Lmax = 2000
Lmin = 0.02
# Factor for logarithmic dilution (n)
# Lmax*((1/n)^(11)) = Lmin for 12 wells
n = (Lmax/Lmin)**(1/(num_wells-1))
# Ligand titration series (uM)
Ltot = Lmax / np.array([n**(float(i)) for i in range(12)])
Ltot
# Dissociation constant for non-fluorescent ligand A: K_dA (uM)
Kd_A = 3.85
# Constant concentration of A will be added to all wells (uM)
Atot= 50
Explanation: Ideally, ligand concentration should span 100-fold Kd to 0.01-fold Kd, log dilution.
The ligand concentration will be in half log dilution from 20 uM ligand.
End of explanation
#Competitive binding function
def three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A):
Parameters
----------
Ptot : float
Total protein concentration
Ltot : float
Total tracer(fluorescent) ligand concentration
Kd_L : float
Dissociation constant
Atot : float
Total competitive ligand concentration
Kd_A : float
Dissociation constant
Returns
-------
P : float
Free protein concentration
L : float
Free ligand concentration
A : float
Free ligand concentration
PL : float
Complex concentration
Kd_L_app : float
Apparent dissociation constant of L in the presence of A
Usage
-----
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
Kd_L_app = Kd_L*(1+Atot/Kd_A)
PL = 0.5 * ((Ptot + Ltot + Kd_L_app) - np.sqrt((Ptot + Ltot + Kd_L_app)**2 - 4*Ptot*Ltot)) # complex concentration (uM)
P = Ptot - PL; # free protein concentration in sample cell after n injections (uM)
L = Ltot - PL; # free tracer ligand concentration in sample cell after n injections (uM)
A = Atot - PL; # free competitive ligand concentration in sample cell after n injections (uM)
return [P, L, A, PL, Kd_L_app]
Explanation: For the assumption that free $[A] = A_{tot}$, it is required that $[A] >> [P]$.
Ligand depletion can cause additional shift of $ K_{dL,app}$
If the competitive ligand is not present ($[A] = A_{tot} = 0$), we can calculate [PL] as a function of Ptot, Ltot, and Kd as follows:
$$[PL] = \frac{[L][P]}{K_{d} }$$
Then we need to put L and P in terms of Ltot and Ptot, using
$$[L] = [Ltot]-[PL]$$
$$[P] = [Ptot]-[PL]$$
This gives us:
$$[PL] = \frac{([Ltot]-[PL])([Ptot]-[PL])}{K_{d} }$$
Solving this for 0 you get:
$$0 = [PL]^2 - ([Ptot] + [Ltot] + K_{d})[PL] + [Ptot][Ltot]$$
Using the quadratic equation:
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
where x is $[PL]$, a is 1, $-([Ptot]+[Ltot]+Kd)$ is b, and $[Ptot][Ltot]$ is c. We get as the only reasonable solution:
$$[PL] = \frac{([Ptot] + [Ltot] + K_{d}) - \sqrt{([Ptot] + [Ltot] + K_{d})^2 - 4[Ptot][Ltot]}}{2}$$
In the presence of non-zero concentration of A -a strictly competitive ligand for the same binding site- titration curve will be shifted. Half maximum saturation point gives us apparent dissociation constant for L ( $K_{dL,app}$). This shift is dependent of $[A]$ and $K_{dA}$.
$$K_{dL,app} = K_{dL}(1+\frac{[A]}{K_{dA}})$$
$$[PL] = \frac{([Ptot] + [Ltot] + K_{dL,app}) - \sqrt{([Ptot] + [Ltot] + K_{dL,app})^2 - 4[Ptot][Ltot]}}{2}$$
Based on Hulme EC, Trevethick MA. Ligand binding assays at equilibrium: validation
and interpretation. Br J Pharmacol. 2010;161(6):1219–37. doi: 10.1111/j.1476-5381.2009.00604.x.
End of explanation
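As a quick numerical sanity check of the closed-form solution above, the same quadratic can be handed to numpy's root finder; the concentrations below are illustrative values chosen here, not part of the original assay.
# Sanity check of the closed-form [PL] solution (illustrative concentrations, in uM)
import numpy as np
Ptot_chk, Ltot_chk, Kd_chk = 0.5, 20.0, 15.0
closed_form = 0.5 * ((Ptot_chk + Ltot_chk + Kd_chk) - np.sqrt((Ptot_chk + Ltot_chk + Kd_chk)**2 - 4 * Ptot_chk * Ltot_chk))
# Same quadratic with a=1, b=-(Ptot+Ltot+Kd), c=Ptot*Ltot; the smaller root is the physical one ([PL] <= Ptot)
quadratic_root = min(np.roots([1.0, -(Ptot_chk + Ltot_chk + Kd_chk), Ptot_chk * Ltot_chk]))
print(closed_form, quadratic_root)  # the two values should agree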
# If there is no competitive ligand
Atot=0 #(uM)
[P_A0, L_A0, A_A0, PL_A0, Kd_L_app_A0] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
plt.semilogx(Ltot,PL_A0, 'o')
plt.xlabel('Ltot')
plt.ylabel('[PL]')
plt.ylim(1e-3,6e-1)
plt.axhline(Ptot,color='0.75',linestyle='--',label='[Ptot]')
plt.axvline(Kd_L,color='k',linestyle='--',label='Kd_L')
plt.axvline(Kd_L_app_A0,color='r',linestyle='--',label='Kd_L_app without A')
plt.legend()
Explanation: Without competitive ligand
End of explanation
# If we change competitor concentration
Atot=10 #(uM)
[P_A10, L_A10, A_A10, PL_A10, Kd_L_app_A10] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
#plt.subplot(1,2,1)
plt.semilogx(Ltot,PL_A10, 'o')
plt.xlabel('Ltot')
plt.ylabel('[PL]')
plt.ylim(1e-3,6e-1)
plt.axhline(Ptot,color='0.75',linestyle='--',label='[Ptot]')
plt.axvline(Kd_L,color='k',linestyle='--',label='Kd_L')
plt.axvline(Kd_L_app_A10,color='r',linestyle='--',label='Kd_L_app with 10 uM A')
plt.legend()
Explanation: In the presence of 10 uM competitive ligand
End of explanation
# Competitor concentration
Atot=50 #(uM)
[P_A50, L_A50, A_A50, PL_A50, Kd_L_app_A50] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
# Plotting complex concentration [PL] as a function of fluorescent ligand concentration Ltot¶
plt.semilogx(Ltot,PL_A50, 'o')
plt.xlabel('Ltot')
plt.ylabel('[PL]')
plt.ylim(1e-3,6e-1)
plt.axhline(Ptot,color='0.75',linestyle='--',label='[Ptot]')
plt.axvline(Kd_L,color='k',linestyle='--',label='Kd_L')
plt.axvline(Kd_L_app_A50,color='r',linestyle='--',label='Kd_L_app with 50 uM A')
plt.legend()
Explanation: In the presence of 50 uM competitive ligand
End of explanation
# Background fluorescence
BKG = 86.2
# Molar fluorescence of free ligand
MF = 2.5
# Molar fluorescence of ligand in complex
FR = 306.1
MFC = FR * MF
Explanation: Predicting experimental fluorescence signal of saturation binding experiment
Molar fluorescence values based on dansyl amide.
End of explanation
Ltot
# Fluorescence measurement of buffer + ligand L titrations
L=Ltot
Flu_buffer = MF*L + BKG
Flu_buffer
# y will be complex concentration
# x will be total ligand concentration
plt.semilogx(Ltot,Flu_buffer,'o')
plt.xlabel('[L]')
plt.ylabel('Fluorescence')
plt.ylim(50,8000)
#plt.legend()
Explanation: Fluorescent ligand (L) titration into buffer
End of explanation
Atot=0 #(uM)
[P_A0, L_A0, A_A0, PL_A0, Kd_L_app_A0] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
# Free ligand concentration in each well (uM)
L=L_A0
#complex concentration
PL=PL_A0
# Fluorescence measurement of the HSA + ligand measurements
Flu_HSA = MF*L + BKG + FR*MF*PL
Flu_HSA
plt.semilogx(Ltot,Flu_buffer,'o',label='buffer')
plt.semilogx(Ltot, Flu_HSA ,'o', label='protein')
plt.xlabel('[L_tot]')
plt.ylabel('Fluorescence')
plt.ylim(50,8000)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.semilogx(Ltot,(Flu_HSA-Flu_buffer),'.', label="difference in fluorescence signal")
plt.xlabel('[L_tot]')
plt.ylabel('Flu_HSA-Flu_buffer')
plt.ylim(0,500)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: Fluorescent ligand titration into HSA (without competitive ligand)
End of explanation
L_percent_depletion=((Ltot-L_A0)/Ltot)*100
plt.semilogx(Ltot,L_percent_depletion,'.',label='L')
plt.xlabel('[L_tot]')
plt.ylabel('% ligand depletion')
plt.ylim(0,100)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: Checking ligand depletion
End of explanation
Atot=10 #(uM)
[P_A10, L_A10, A_A10, PL_A10, Kd_L_app_A10] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
# Free ligand concentration in each well (uM)
L=L_A10
#complex concentration
PL=PL_A10
# Fluorescence measurement of the HSA + ligand measurements
Flu_HSA = MF*L + BKG + FR*MF*PL
plt.semilogx(Ltot,Flu_buffer,'o',label='buffer')
plt.semilogx(Ltot, Flu_HSA ,'o', label='protein')
plt.xlabel('[L_tot]')
plt.ylabel('Fluorescence')
plt.ylim(50,8000)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.semilogx(Ltot,(Flu_HSA-Flu_buffer),'.',label='difference in fluorescence signal')
plt.xlabel('[L_tot]')
plt.ylabel('Flu_HSA-Flu_buffer')
plt.ylim(0,500)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: Fluorescent ligand titration into HSA (with 10 uM competitive ligand)
End of explanation
Atot=50 #(uM)
[P_A50, L_A50, A_A50, PL_A50, Kd_L_app_A50] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
# Free ligand concentration in each well (uM)
L=L_A50
#complex concentration
PL=PL_A50
# Fluorescence measurement of the HSA + ligand measurements
Flu_HSA = MF*L + BKG + FR*MF*PL
plt.semilogx(Ltot,Flu_buffer,'o',label='buffer')
plt.semilogx(Ltot, Flu_HSA ,'o', label='protein')
plt.xlabel('[L_tot]')
plt.ylabel('Fluorescence')
plt.ylim(50,8000)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.semilogx(Ltot,(Flu_HSA-Flu_buffer),'.',label='difference in fluorescence signal')
plt.xlabel('[L_tot]')
plt.ylabel('Flu_HSA-Flu_buffer')
plt.ylim(0,500)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: Fluorescent ligand titration into HSA (with 50 uM competitive ligand)
End of explanation
L_percent_depletion=((Ltot-L_A50)/Ltot)*100
plt.semilogx(Ltot,L_percent_depletion,'.',label='L')
plt.xlabel('[L_tot]')
plt.ylabel('% ligand depletion')
plt.ylim(0,100)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: Checking ligand depletion
End of explanation |
8,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook explores the PHAT v2 artificial star test (AST) results, and how to use them in m31hst.
Step1: Assuming that the Williams et al 2014 Table 6 file was downloaded to the correct location (see the README of m31hst), the AST result catalog can be loaded with the load_phat_ast_table into an astropy table.
Step2: PHAT did AST work in six fields, but unfortunately they don't label what field a star belongs to. This labelling is important because it connects the AST results to measurements of local stellar density. Here I'm showing that we can use K-Means clustering on the stars to reliably label stars into the six AST fields.
Step3: For each AST field, let's plot the median photometric error in the F475W band as a function of luminosity
Step4: We can also plot the completeness (fraction of artificial stars recovered successfully) in the F475W band as a function of luminosity for each AST field
Step5: We provide a PhatAstTable object to provide access to the AST data, sorted by field
Step6: Here we create a crowding file for use in StarFISH from stars in the outer-most (0) AST field | Python Code:
%matplotlib inline
import numpy as np
from sklearn.cluster import KMeans
from astroML.stats import binned_statistic
import matplotlib.pyplot as plt
Explanation: This notebook explores the PHAT v2 artificial star test (AST) results, and how to use them in m31hst.
End of explanation
from m31hst.phatast import load_phat_ast_table
t = load_phat_ast_table()
Explanation: Assuming that the Williams et al 2014 Table 6 file was downloaded to the correct location (see the README of m31hst), the AST result catalog can be loaded with the load_phat_ast_table into an astropy table.
End of explanation
km = KMeans(n_clusters=6)
xy = np.vstack((t['ra'], t['dec'])).T
km.fit(xy)
centers = km.cluster_centers_
print centers
colors = ['c', 'm', 'y', 'k', 'r', 'b']
figure = plt.figure(figsize=(6, 6))
ax = figure.add_subplot(111)
ax.scatter(t['ra'], t['dec'],
marker='.', edgecolors='None', s=1,
c=km.labels_, rasterized=True)
ax.scatter(centers[:, 0], centers[:, 1],
marker='x', s=20, c='None',
edgecolors=colors, zorder=10)
ax.set_xlim(11.6, 10.6)
figure.show()
Explanation: PHAT did AST work in six fields, but unfortunately they don't label what field a star belongs to. This labelling is important because it connects the AST results to measurements of local stellar density. Here I'm showing that we can use K-Means clustering on the stars to reliably label stars into the six AST fields.
End of explanation
f2 = plt.figure(figsize=(6, 6))
ax = f2.add_subplot(111)
for label, c in zip(range(6), colors):
sel = np.where(km.labels_ == label)[0]
tt = t[sel]
diffs = np.abs(tt['f475w_in'] - tt['f475w_out'])
s = np.where(diffs < 10.)[0]
err, edges = binned_statistic(tt['f475w_in'][s],
diffs[s],
statistic='median', bins=50)
ax.scatter(tt['f475w_in'][::2], diffs[::2],
s=1, marker='.', alpha=0.4,
edgecolors='None', facecolors=c)
ax.plot(edges[:-1], err, ls='-', c=c)
ax.set_xlabel(r'$m$')
ax.set_ylabel(r'$\Delta m$')
ax.set_ylim(0, 0.2)
ax.set_xlim(15., 25.)
f2.show()
Explanation: For each AST field, let's plot the median photometric error in the F475W band as a function of luminosity:
End of explanation
f3 = plt.figure(figsize=(6, 6))
ax = f3.add_subplot(111)
n_bins = 80
for label, c in zip(range(6), colors):
sel = np.where(km.labels_ == label)[0]
tt = t[sel]
dropped = np.where(tt['f475w_out'] >= 90.)[0]
drop_flag = np.ones(len(tt), dtype=np.float)
drop_flag[dropped] = 0.
total_recovered, edges = binned_statistic(tt['f475w_in'],
drop_flag,
statistic='sum',
bins=n_bins)
# range=np.array([15., 28.]))
count, edges = binned_statistic(tt['f475w_in'],
drop_flag,
statistic='count',
bins=n_bins)
# range=np.array([15., 28.]))
ax.plot(edges[:-1], total_recovered / count, ls='-', c=c)
ax.set_xlabel(r'$m$')
ax.set_ylabel(r'$\Delta m$')
ax.set_ylim(-0.05, 1.05)
ax.set_xlim(15., 28.)
f3.show()
Explanation: We can also plot the completeness (fraction of artificial stars recovered successfully) in the F475W band as a function of luminosity for each AST field:
End of explanation
from m31hst.phatast import PhatAstTable
tbl = PhatAstTable()
tbl.fields
Explanation: We provide a PhatAstTable object to provide access to the AST data, sorted by field:
End of explanation
tbl.write_crowdfile_for_field("crowdfile.txt", 0)
%%bash
head crowdfile.txt
Explanation: Here we create a crowding file for use in StarFISH from stars in the outer-most (0) AST field:
End of explanation |
8,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initial_t_rad Bug
The purpose of this notebook is to demonstrate the bug associated with setting the initial_t_rad tardis.plasma property.
Step1: Density and Abundance test files
Below are the density and abundance data from the test files used for demonstrating this bug.
Step2: No initial_t_rad
Below we run a simple tardis simulation where initial_t_rad is not set. The simulation has v_inner_boundary = 3350 km/s and v_outer_boundary = 3750 km/s, both within the velocity range in the density file. The simulation runs fine.
Step3: Debugging
Step4: Debugging
Debugging No initial_t_radiative run to compare with Yes initial_t_radiative run
We place two breakpoints
Step5: Debugging Yes initial_t_radiative run
We place the same two breakpoints as above
Step6: Checking model.t_radiative initialization when YES initial_t_rad
In the above debugging blocks, we have identified the following discrepancy INSIDE assemble_plasma()
Step7: Checking self._t_radiative initialization when NO initial_t_rad at line 108
IMPORTANT
Step8: CODE CHANGE | Python Code:
pwd
import tardis
import numpy as np
Explanation: Initial_t_rad Bug
The purpose of this notebook is to demonstrate the bug associated with setting the initial_t_rad tardis.plasma property.
End of explanation
density_dat = np.loadtxt('data/density.txt',skiprows=1)
abund_dat = np.loadtxt('data/abund.dat', skiprows=1)
print(density_dat)
print(abund_dat)
Explanation: Density and Abundance test files
Below are the density and abundance data from the test files used for demonstrating this bug.
End of explanation
no_init_trad = tardis.run_tardis('data/config_no_init_trad.yml')
no_init_trad.model.velocity
no_init_trad.model.no_of_shells, no_init_trad.model.no_of_raw_shells
print('raw velocity: \n',no_init_trad.model.raw_velocity)
print('raw velocity shape: ',no_init_trad.model.raw_velocity.shape)
print('(v_boundary_inner, v_boundary_outer) = (%i, %i)'%
(no_init_trad.model.v_boundary_inner.to('km/s').value, no_init_trad.model.v_boundary_outer.to('km/s').value))
print('v_boundary_inner_index: ', no_init_trad.model.v_boundary_inner_index)
print('v_boundary_outer_index: ', no_init_trad.model.v_boundary_outer_index)
print('t_rad', no_init_trad.model.t_rad)
Explanation: No initial_t_rad
Below we run a simple tardis simulation where initial_t_rad is not set. The simulation has v_inner_boundary = 3350 km/s and v_outer_boundary = 3750 km/s, both within the velocity range in the density file. The simulation runs fine.
End of explanation
%%debug
init_trad = tardis.run_tardis('data/config_init_trad.yml')
init_trad = tardis.run_tardis('data/config_init_trad.yml')
Explanation: Debugging
End of explanation
%%debug
no_init_trad = tardis.run_tardis('config_no_init_trad.yml')
Explanation: Debugging
Debugging No initial_t_radiative run to compare with Yes initial_t_radiative run
We place two breakpoints:
break 1. tardis/base:37 --> Stops in the run_tardis() function when the simulation is initialized.
break 2. tardis/simulation/base:436 --> Stops after the Radial1DModel has been built from the config file, but before the plasma has been initialized.
IMPORTANT:
We check the model.t_radiative property INSIDE the assemble_plasma function. Notice that it has len(model.t_radiative) = model.no_of_shells = 5
End of explanation
%%debug
init_trad = tardis.run_tardis('config_init_trad.yml')
Explanation: Debugging Yes initial_t_radiative run
We place the same two breakpoints as above:
break 1. tardis/base:37 --> Stops in the run_tardis() function when the simulation is initialized.
break 2. tardis/simulation/base:436 --> Stops after the Radial1DModel has been built from the config file, but before the plasma has been initialized.
IMPORTANT:
We check the model.t_radiative property INSIDE the assemble_plasma function. Notice that it has len(model.t_radiative) = 6 which is NOT EQUAL to model.no_of_shells = 5
End of explanation
%%debug
init_trad = tardis.run_tardis('config_init_trad.yml')
Explanation: Checking model.t_radiative initialization when YES initial_t_rad
In the above debugging blocks, we have identified the following discrepancy INSIDE assemble_plasma():
len(model.t_radiative) = 6 when YES initial_t_rad
len(model.t_radiative) = 5 when NO initial_t_rad
Therefore, we investigate in the following debugging block how model.t_radiative is initialized. We place a breakpoint at tardis/simulation/base:432 and step INSIDE the Radial1DModel initialization.
Breakpoints:
break 1. tardis/simulation/base:432 --> Stops so that we can step INSIDE Radial1DModel initialization from_config().
break 2. tardis/model/base:330 --> Where temperature is handled INSIDE Radial1DModel initialization from_config().
break 3. tardis/model/base:337 --> t_radiative is initialized. It has the same length as velocity which is the raw velocities from the density file.
break 4. tardis/model/base:374 --> init() for Radial1DModel is called. We check values of relevant variables.
break 5. tardis/model/base:76 --> Stops at first line of Radial1DModel init() function.
break 6. tardis/model/base:101 --> self._t_radiative is set.
break 7. tardis/model/base:140 --> Stops at first line of self.t_radiative setter.
break 8. tardis/model/base:132 --> Stops at first line of self.t_radiative getter.
break 9. tardis/model/base:108 --> Stop right after self._t_radiative is set. NOTICE that neither the setter nor the getter was called. IMPORTANT: at line 108, we have len(self._t_radiative) = 10. TO DO: Check len(self._t_radiative) at line 108 in the NO initial_t_rad case.
End of explanation
%%debug
no_init_trad = tardis.run_tardis('config_no_init_trad.yml')
Explanation: Checking self._t_radiative initialization when NO initial_t_rad at line 108
IMPORTANT: We find that len(self._t_radiative) = 5. This is a DISCREPANCY with the YES initial_t_rad case.
End of explanation
init_trad = tardis.run_tardis('config_init_trad.yml')
import numpy as np
a = np.array([1,2,3,4,5,6,7,8])
a[3:8]
a
2 in a
np.argwhere(a==6)[0][0]
np.searchsorted(a, 6.5)
if (2 in a) and (3.5 in a):
print('hi')
assert 1==1.2, "test"
a[3:6]
Explanation: CODE CHANGE:
We propose the following change to tardis/model/base:106
Line 106 Before Change: self._t_radiative = t_radiative
Line 106 After Change: self._t_radiative = t_radiative[1:1 + self.no_of_shells]
t_radiative[0] corresponds to the temperature within the inner boundary, and so should be ignored.
End of explanation |
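For illustration only, the proposed slice can be tried on a stand-in array; the temperatures below are placeholders, not values from the tardis test files.
# Stand-in demonstration of the proposed fix (placeholder temperatures, not real tardis data)
import numpy as np
no_of_shells = 5
t_radiative = np.array([9000, 9500, 9400, 9300, 9200, 9100, 9000, 8900, 8800, 8700])  # 10 raw values
trimmed = t_radiative[1:1 + no_of_shells]  # drops t_radiative[0], the value inside the inner boundary
print(len(trimmed), trimmed)  # length 5 -> matches no_of_shells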
8,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A socket is one endpoint of a communication channel used by programs to pass data back and forth locally or across the Internet. Sockets have two primary properties controlling the way they send data
Step1: Use gethostbyname() to consult the operating system hostname resolution API and convert the name of a server to its numerical address.
Step2: For access to more naming information about a server, use gethostbyname_ex(). It returns the canonical hostname of the server, any aliases, and all of the available IP addresses that can be used to reach it.
Step3: Use getfqdn() to convert a partial name to a fully qualified domain name.
Step4: When the address of a server is available, use gethostbyaddr() to do a “reverse” lookup for the name.
Step5: Finding Service Information
In addition to an IP address, each socket address includes an integer port number. Many applications can run on the same host, listening on a single IP address, but only one socket at a time can use a port at that address. The combination of IP address, protocol, and port number uniquely identify a communication channel and ensure that messages sent through a socket arrive at the correct destination.
Some of the port numbers are pre-allocated for a specific protocol. For example, communication between email servers using SMTP occurs over port number 25 using TCP, and web clients and servers use port 80 for HTTP. The port numbers for network services with standardized names can be looked up with getservbyname().
Step6: To reverse the service port lookup, use getservbyport().
Step8: The number assigned to a transport protocol can be retrieved with getprotobyname().
Step10: Looking Up Server Addresses
getaddrinfo() converts the basic address of a service into a list of tuples with all of the information necessary to make a connection. The contents of each tuple will vary, containing different network families or protocols.
Step11: IP Address Representations
Network programs written in C use the data type struct sockaddr to represent IP addresses as binary values (instead of the string addresses usually found in Python programs). To convert IPv4 addresses between the Python representation and the C representation, use inet_aton() and inet_ntoa().
Step12: The four bytes in the packed format can be passed to C libraries, transmitted safely over the network, or saved to a database compactly.
The related functions inet_pton() and inet_ntop() work with both IPv4 and IPv6 addresses, producing the appropriate format based on the address family parameter passed in. | Python Code:
import socket
print(socket.gethostname())
Explanation: A socket is one endpoint of a communication channel used by programs to pass data back and forth locally or across the Internet. Sockets have two primary properties controlling the way they send data: the address family controls the OSI network layer protocol used and the socket type controls the transport layer protocol.
Python supports three address families. The most common, AF_INET, is used for IPv4 Internet addressing. IPv4 addresses are four bytes long and are usually represented as a sequence of four numbers, one per octet, separated by dots (e.g., 10.1.1.5 and 127.0.0.1). These values are more commonly referred to as “IP addresses.” Almost all Internet networking is done using IP version 4 at this time.
AF_INET6 is used for IPv6 Internet addressing. IPv6 is the “next generation” version of the Internet protocol, and supports 128-bit addresses, traffic shaping, and routing features not available under IPv4. Adoption of IPv6 continues to grow, especially with the proliferation of cloud computing and the extra devices being added to the network because of Internet-of-things projects.
AF_UNIX is the address family for Unix Domain Sockets (UDS), an inter-process communication protocol available on POSIX-compliant systems. The implementation of UDS typically allows the operating system to pass data directly from process to process, without going through the network stack. This is more efficient than using AF_INET, but because the file system is used as the namespace for addressing, UDS is restricted to processes on the same system. The appeal of using UDS over other IPC mechanisms such as named pipes or shared memory is that the programming interface is the same as for IP networking, so the application can take advantage of efficient communication when running on a single host, but use the same code when sending data across the network.
Note:
The AF_UNIX constant is only defined on systems where UDS is supported.
The socket type is usually either SOCK_DGRAM for message-oriented datagram transport or SOCK_STREAM for stream-oriented transport. Datagram sockets are most often associated with UDP, the user datagram protocol. They provide unreliable delivery of individual messages. Stream-oriented sockets are associated with TCP, transmission control protocol. They provide byte streams between the client and server, ensuring message delivery or failure notification through timeout management, retransmission, and other features.
Most application protocols that deliver a large amount of data, such as HTTP, are built on top of TCP because it makes it simpler to create complex applications when message ordering and delivery is handled automatically. UDP is commonly used for protocols where order is less important (since the messages are self-contained and often small, such as name look-ups via DNS), or for multicasting (sending the same data to several hosts). Both UDP and TCP can be used with either IPv4 or IPv6 addressing.
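For example, the two socket types can be created directly; this is a minimal illustration and the sockets are closed again without ever being connected.
# Minimal illustration of the two socket types (created and closed, never connected)
import socket
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # stream socket, used with TCP
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # datagram socket, used with UDP
print(tcp_sock.type, udp_sock.type)
tcp_sock.close()
udp_sock.close()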
Looking up Hosts on the Network
socket includes functions to interface with the domain name services on the network so a program can convert the host name of a server into its numerical network address. Applications do not need to convert addresses explicitly before using them to connect to a server, but it can be useful when reporting errors to include the numerical address as well as the name value being used.
To find the official name of the current host, use gethostname()
End of explanation
import socket
HOSTS = [
'apu',
'pymotw.com',
'www.python.org',
'nosuchname',
]
for host in HOSTS:
try:
print('{} : {}'.format(host, socket.gethostbyname(host)))
except socket.error as msg:
print('{} : {}'.format(host, msg))
Explanation: Use gethostbyname() to consult the operating system hostname resolution API and convert the name of a server to its numerical address.
End of explanation
import socket
HOSTS = [
'apu',
'pymotw.com',
'www.python.org',
'nosuchname',
]
for host in HOSTS:
print(host)
try:
name, aliases, addresses = socket.gethostbyname_ex(host)
print(' Hostname:', name)
print(' Aliases :', aliases)
print(' Addresses:', addresses)
except socket.error as msg:
print('ERROR:', msg)
print()
Explanation: For access to more naming information about a server, use gethostbyname_ex(). It returns the canonical hostname of the server, any aliases, and all of the available IP addresses that can be used to reach it.
End of explanation
import socket
for host in ['scott-t460', 'pymotw.com']:
print('{:>10} : {}'.format(host, socket.getfqdn(host)))
Explanation: Use getfqdn() to convert a partial name to a fully qualified domain name.
End of explanation
import socket
hostname, aliases, addresses = socket.gethostbyaddr('10.104.190.53')
print('Hostname :', hostname)
print('Aliases :', aliases)
print('Addresses:', addresses)
Explanation: When the address of a server is available, use gethostbyaddr() to do a “reverse” lookup for the name.
End of explanation
import socket
from urllib.parse import urlparse
URLS = [
'http://www.python.org',
'https://www.mybank.com',
'ftp://prep.ai.mit.edu',
'gopher://gopher.micro.umn.edu',
'smtp://mail.example.com',
'imap://mail.example.com',
'imaps://mail.example.com',
'pop3://pop.example.com',
'pop3s://pop.example.com',
]
for url in URLS:
parsed_url = urlparse(url)
port = socket.getservbyname(parsed_url.scheme)
print('{:>6} : {}'.format(parsed_url.scheme, port))
Explanation: Finding Service Information
In addition to an IP address, each socket address includes an integer port number. Many applications can run on the same host, listening on a single IP address, but only one socket at a time can use a port at that address. The combination of IP address, protocol, and port number uniquely identify a communication channel and ensure that messages sent through a socket arrive at the correct destination.
Some of the port numbers are pre-allocated for a specific protocol. For example, communication between email servers using SMTP occurs over port number 25 using TCP, and web clients and servers use port 80 for HTTP. The port numbers for network services with standardized names can be looked up with getservbyname().
End of explanation
import socket
from urllib.parse import urlunparse
for port in [80, 443, 21, 70, 25, 143, 993, 110, 995]:
url = '{}://example.com/'.format(socket.getservbyport(port))
print(url)
Explanation: To reverse the service port lookup, use getservbyport().
End of explanation
import socket
def get_constants(prefix):
Create a dictionary mapping socket module
constants to their names.
return {
getattr(socket, n): n
for n in dir(socket)
if n.startswith(prefix)
}
protocols = get_constants('IPPROTO_')
for name in ['icmp', 'udp', 'tcp']:
proto_num = socket.getprotobyname(name)
const_name = protocols[proto_num]
print('{:>4} -> {:2d} (socket.{:<12} = {:2d})'.format(
name, proto_num, const_name,
getattr(socket, const_name)))
Explanation: The number assigned to a transport protocol can be retrieved with getprotobyname().
End of explanation
import socket
def get_constants(prefix):
Create a dictionary mapping socket module
constants to their names.
return {
getattr(socket, n): n
for n in dir(socket)
if n.startswith(prefix)
}
families = get_constants('AF_')
types = get_constants('SOCK_')
protocols = get_constants('IPPROTO_')
for response in socket.getaddrinfo('www.python.org', 'http'):
# Unpack the response tuple
family, socktype, proto, canonname, sockaddr = response
print('Family :', families[family])
print('Type :', types[socktype])
print('Protocol :', protocols[proto])
print('Canonical name:', canonname)
print('Socket address:', sockaddr)
print()
Explanation: Looking Up Server Addresses
getaddrinfo() converts the basic address of a service into a list of tuples with all of the information necessary to make a connection. The contents of each tuple will vary, containing different network families or protocols.
End of explanation
import binascii
import socket
import struct
import sys
for string_address in ['192.168.1.1', '127.0.0.1']:
packed = socket.inet_aton(string_address)
print('Original:', string_address)
print('Packed :', binascii.hexlify(packed))
print('Unpacked:', socket.inet_ntoa(packed))
print()
Explanation: IP Address Representations
Network programs written in C use the data type struct sockaddr to represent IP addresses as binary values (instead of the string addresses usually found in Python programs). To convert IPv4 addresses between the Python representation and the C representation, use inet_aton() and inet_ntoa().
End of explanation
import binascii
import socket
import struct
import sys
string_address = '2002:ac10:10a:1234:21e:52ff:fe74:40e'
packed = socket.inet_pton(socket.AF_INET6, string_address)
print('Original:', string_address)
print('Packed :', binascii.hexlify(packed))
print('Unpacked:', socket.inet_ntop(socket.AF_INET6, packed))
Explanation: The four bytes in the packed format can be passed to C libraries, transmitted safely over the network, or saved to a database compactly.
The related functions inet_pton() and inet_ntop() work with both IPv4 and IPv6 addresses, producing the appropriate format based on the address family parameter passed in.
End of explanation |
8,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python 2.7 compatibility
To achieve Python 2.7 compatibility we will import the "_winreg" module
from six.moves, since it has been renamed to winreg in Python 3.
Step1: The relevant keys in the registry may not exist at all. This will throw a
WindowsError on Python 2.7 and a FileNotFoundError on Python 3. Hence, for
Python 2.7 we introduce the Python 3 exception name FileNotFoundError.
Step2: All proxy-relevant keys are held in a set of constants.
Step12: Proxy setting class
All access to the registry and the database of proxy settings, as well as the
application programming interface, should be bound to one class called
ProxySetting.
There are four entries in the registry that are used to configure the proxy
settings | Python Code:
import re, six
from six.moves import winreg
Explanation: Python 2.7 compatibility
To achieve Python 2.7 compatibility we will import the "_winreg" module
from six.moves, since it has been renamed to winreg in Python 3.
End of explanation
if six.PY2:
FileNotFoundError = WindowsError
Explanation: The relevant keys in the registry may not exist at all. This will throw a
WindowsError on Python 2.7 and a FileNotFoundError on Python 3. Hence, for
Python 2.7 we introduce the Python 3 exception name FileNotFoundError.
End of explanation
# Registry constants
_ROOT = winreg.HKEY_CURRENT_USER
_BASEKEY = 'Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings'
_ACCESS = winreg.KEY_ALL_ACCESS
_PROXYENABLE = 'ProxyEnable'
_PROXYHTTP11 = 'ProxyHttp1.1'
_PROXYSERVER = 'ProxyServer'
_PROXYOVERRIDE = 'ProxyOverride'
_SUBKEYS = [
_PROXYENABLE,
_PROXYHTTP11,
_PROXYSERVER,
_PROXYOVERRIDE,
]
Explanation: All proxy-relevant keys are held in a set of constants.
End of explanation
class ProxySetting(object):
def __init__(self):
# Internal state (default empty, disabled setting)
self._name = None
self._set_defaults()
def __repr__(self):
if not self.enable:
return u'<Proxy Disabled>'
else:
return u"<Proxy '{0}'>".format(self._server[0])
def __getitem__(self, key):
return self._registry[key][0]
def __setitem__(self, key, value):
v, t = self._registry[key]
if key in [_PROXYENABLE, _PROXYHTTP11]:
if not isinstance(value, int) or value not in [0, 1]:
raise Exception('Wrong type or value')
self._registry[key] = (value, t)
elif key in [_PROXYSERVER, _PROXYOVERRIDE]:
if not isinstance(value, six.string_types):
raise Exception('Wrong type or value')
self._registry[key] = (value, t)
else:
raise Exception('Could not set value')
def _set_defaults(self):
self._registry = {
_PROXYENABLE: (0, 4),
_PROXYHTTP11: (1, 4),
_PROXYSERVER: ('', 1),
_PROXYOVERRIDE: ('', 1)
}
def registry_read(self):
Read values from registry
proxykey = winreg.OpenKey(_ROOT, _BASEKEY, 0, _ACCESS)
self._set_defaults()
# If any value is not available in the registry, we fall back to the defaults
for subkey in _SUBKEYS:
try:
# This will return (value, type) tuples, that are stored for each subkey
self._registry[subkey] = winreg.QueryValueEx(proxykey, subkey)
except FileNotFoundError:
pass
winreg.CloseKey(proxykey)
# Normalize ProxyOverride to semicolon separated list
self.override = self.override
def registry_write(self):
Write values to registry
proxykey = winreg.OpenKey(_ROOT, _BASEKEY, 0, _ACCESS)
for subkey in _SUBKEYS:
value, regtype = self._registry[subkey]
winreg.SetValueEx(proxykey, subkey, 0, regtype, value)
winreg.CloseKey(proxykey)
@property
def enable(self):
Proxy enable status
return self[_PROXYENABLE] == 1
@enable.setter
def enable(self, on):
Set enable value from a boolean value
if on:
self[_PROXYENABLE] = 1
else:
self[_PROXYENABLE] = 0
@property
def http11(self):
Proxy http1.1 status
return self[_PROXYHTTP11] == 1
@http11.setter
def http11(self, on):
if on:
self[_PROXYHTTP11] = 1
else:
self[_PROXYHTTP11] = 0
@property
def server(self):
Return the proxy server(s).
If individual proxy servers are set, then a dictionary
mapping protocol to proxy:port is returned, e.g.:
dict(http='192.168.0.1:8000',
https='192.168.0.1:8001')
If only one proxy is used for all protocols, then a
dictionary of the form:
dict(all='192.168.0.1:8000')
is returned.
# If protocol specific proxy settings are used, these are
# assigned to the protocol names with the '=' sign
proxyserver = self[_PROXYSERVER]
if proxyserver.find('=') >= 0:
servers = proxyserver.split(';')
servers = dict(map(lambda p: p.split('='), servers))
else:
servers = dict(all=proxyserver)
return servers
@server.setter
def server(self, proxies):
Set the proxy servers
If proxies is a string, it will be assigned as the proxy server
setting directly, e.g.
>>> p = ProxySetting()
>>> p.server = '192.168.0.1:8000'
>>> p.server
{'all': '192.168.0.1:8000'}
>>> p.server = 'http=192.168.0.1:8000;https=192.168.0.1:8001;ftp=192.168.0.1:8002;socks=192.168.0.1:8004'
>>> p.server
{'ftp': '192.168.0.1:8002',
'http': '192.168.0.1:8000',
'https': '192.168.0.1:8001',
'socks': '192.168.0.1:8004'}
If the proxies parameter is a dictionary, the individual entries will be used.
Allowed keys are 'http', 'https', 'ftp', and 'socks' - or - 'all'.
If a key 'all' is provided, it will take precedence. Example:
>>> p.server = dict(all='192.168.0.1:8000')
>>> p.server
{'all': '192.168.0.1:8000'}
if isinstance(proxies, six.string_types):
# TODO: Check if string is valid
self[_PROXYSERVER] = proxies
elif isinstance(proxies, dict):
# Check for 'all' first
if 'all' in proxies:
# TODO: Check value
self[_PROXYSERVER] = proxies['all']
else:
# TODO: Check validity of dict
http = proxies.get('http', None)
https = proxies.get('https', None)
ftp = proxies.get('ftp', None)
socks = proxies.get('socks', None)
proxy_list = []
if http:
proxy_list.append('http={0}'.format(http))
if https:
proxy_list.append('https={0}'.format(https))
if ftp:
proxy_list.append('ftp={0}'.format(ftp))
if socks:
proxy_list.append('socks={0}'.format(socks))
# This one even works with the empty list
self[_PROXYSERVER] = ';'.join(proxy_list)
else:
# TODO: Provide Exception-classes
raise Exception('Wrong proxy type')
@property
def override(self):
Return a list of all proxy exceptions
return [e.strip() for e in re.split(';|,', self[_PROXYOVERRIDE]) if e.strip() != '']
@override.setter
def override(self, overridelist):
Set the override value from a list of proxy exceptions
# TODO: Add some check on validity of input
self[_PROXYOVERRIDE] = ';'.join(overridelist)
Explanation: Proxy setting class
All access to the registry and the database of proxy settings, as well as the
application programming interface, should be bound to one class called
ProxySetting.
There are four entries in the registry that are used to configure the proxy
settings:
ProxyEnable (Proxy enabled or disabled)
ProxyHttp11 (Proxy use HTTP 1.1 enable or disable)
ProxyServer (String of all proxy servers and ports for all protocols)
ProxyOverride (String of all proxy exceptions)
As seen from the Python API these will be represented as Python types, i.e.
ProxyEnable, a boolean
ProxyHttp11, a boolean
ProxyServer, a string 'server:port' if only one proxy is to be used or a
dictionary mapping protocol to 'server:port' setting
ProxyOverride, a list of proxy exceptions
End of explanation |
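A possible usage sketch for the class defined above follows; it only runs on Windows, the proxy addresses and exceptions are made-up examples, and registry_write() is left commented out because it would change the current user's real settings.
# Usage sketch for ProxySetting (Windows only; example values)
p = ProxySetting()
p.registry_read()  # load the current values from the registry
p.server = {'http': '192.168.0.1:8000', 'https': '192.168.0.1:8001'}  # example proxies
p.override = ['localhost', '127.0.0.1', '*.example.com']  # example exceptions
p.enable = True
print(p, p.server, p.override)
# p.registry_write()  # uncomment to persist the changes to the registry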
8,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Links to documentation/tutorials for IPython/Python/numpy/matplotlib/git, as well as the source code, can be found in the GitHub repo.
Step1: Modeling with Newton's law
Step2: $H(q(t), p(t))$ is constant because energy conservation must hold | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Links to documentation/tutorials for IPython/Python/numpy/matplotlib/git, as well as the source code, can be found in the GitHub repo.
End of explanation
values = np.loadtxt('values')
alpha = values[:,0]
alpha_dot = values[:,1]
plt.plot(alpha)
plt.ylabel(r'$\alpha(t)$')
plt.xlabel(r'$t$')
plt.grid()
plt.plot(alpha[:100])
plt.ylabel(r'$\alpha(t)$')
plt.xlabel(r'$t$')
plt.grid()
q = alpha
p = alpha_dot
H = - np.cos(q) + p**2 / 2
Explanation: Modeling with Newton's law:
The tangential force $F_{tan}(t) = m \; g \; sin(\alpha(t))$ acts against the restoring force of the pendulum:
$F_{R}(t) = -F_{tan}(t) = m \; \ddot \alpha(t) \; l$.
Altogether, this yields the second-order ODE:
$- m \; g \; sin(\alpha(t)) = m \; \ddot \alpha(t) \; l$.
Rewritten as a first-order ODE system:
$\begin{eqnarray}
\dot \alpha(t) &=& x(t) \\
\dot x(t) &=& - \frac{g}{l} sin(\alpha(t))
\end{eqnarray}$
Modeling with Hamilton's principle:
Given the energy function: $H(p,q) = -m g l \; cos(q) + \frac{1}{2 m l^{2}} p^{2}$.
Hamilton's equations of motion give:
$\begin{eqnarray}
\dot q &=& \frac{\partial H}{\partial p} &=& p \frac{1}{m l^{2}} \\
\dot p &=& - \frac{\partial H}{\partial q} &=& - m g l \; sin(q)
\end{eqnarray}$
Computation
The iteration is implemented in C++ (aufgabe3.cpp). The values are stored in the file values.
End of explanation
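The C++ iteration itself is not shown in this notebook; as a rough Python sketch of an equivalent integration (step size, number of steps, initial conditions, and g/l = 1 are all assumptions here), it could look like this:
# Rough Python sketch of the iteration (assumed: g/l = 1, dt = 0.01, alpha(0) = 1.0, alpha_dot(0) = 0.0)
import numpy as np
dt, n_steps = 0.01, 10000
alpha_sim = np.empty(n_steps)
alpha_dot_sim = np.empty(n_steps)
alpha_sim[0], alpha_dot_sim[0] = 1.0, 0.0
for i in range(1, n_steps):
    alpha_dot_sim[i] = alpha_dot_sim[i - 1] - np.sin(alpha_sim[i - 1]) * dt  # dot x = -(g/l) sin(alpha)
    alpha_sim[i] = alpha_sim[i - 1] + alpha_dot_sim[i] * dt  # semi-implicit Euler keeps H nearly constant
# np.savetxt('values', np.column_stack([alpha_sim, alpha_dot_sim]))  # same two-column layout as 'values'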
plt.plot(H)
plt.ylabel(r'$H(p(t),q(t))$')
plt.xlabel(r'$t$')
plt.axis([0,10000, -1, 0])
plt.grid()
plt.plot(q, p)
plt.ylabel(r'$p(t)$')
plt.xlabel(r'$q(t)$')
plt.grid()
Explanation: $H(q(t), p(t))$ is constant because energy conservation must hold: $\frac{d}{dt} H(p(t),q(t)) = 0$
End of explanation |
8,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitzhugh-Nagumo simplified action-potential model
This example shows how the Fitzhugh-Nagumo simplified action potential (AP) model can be used.
The model is based on a simplification and state-reduction of the original squid axon model by Hodgkin and Huxley.
It has two state variables, a voltage-like variable and a recovery variable.
Step1: With these parameters, the model creates wide AP waveforms that are more reminiscent of muscle cells than neurons.
We now set up a simple optimisation problem with the model.
Step2: Next, we set up a problem. Because this model has multiple outputs (2), we use a MultiOutputProblem.
Step3: Finally, we choose a wide set of boundaries and run!
Step4: This shows the parameters are not retrieved entirely correctly, but the traces still strongly overlap.
Sampling with Monomial-gamma HMC
The Fitzhugh-Nagumo model has sensitivities calculated by the forward sensitivities approach, so we can use samplers that use gradients (they will be slower per iteration, although perhaps not in ESS per second!), like Monomial-gamma HMC.
Step5: Print results.
Step6: Plot the few posterior predictive simulations versus data. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pints
import pints.toy
# Create a model
model = pints.toy.FitzhughNagumoModel()
# Run a simulation
parameters = [0.1, 0.5, 3]
times = np.linspace(0, 20, 200)
values = model.simulate(parameters, times)
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Value')
plt.plot(times, values)
plt.legend(['Voltage', 'Recovery'])
plt.show()
Explanation: Fitzhugh-Nagumo simplified action-potential model
This example shows how the Fitzhugh-Nagumo simplified action potential (AP) model can be used.
The model is based on a simplification and state-reduction of the original squid axon model by Hodgkin and Huxley.
It has two state variables, a voltage-like variable and a recovery variable.
End of explanation
# First add some noise
sigma = 0.5
noisy = values + np.random.normal(0, sigma, values.shape)
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Noisy values')
plt.plot(times, noisy)
plt.show()
Explanation: With these parameters, the model creates wide AP waveforms that are more reminiscent of muscle cells than neurons.
We now set up a simple optimisation problem with the model.
End of explanation
problem = pints.MultiOutputProblem(model, times, noisy)
score = pints.SumOfSquaresError(problem)
Explanation: Next, we set up a problem. Because this model has multiple outputs (2), we use a MultiOutputProblem.
End of explanation
# Select boundaries
boundaries = pints.RectangularBoundaries([0., 0., 0.], [10., 10., 10.])
# Select a starting point
x0 = [1, 1, 1]
# Perform an optimization
found_parameters, found_value = pints.optimise(score, x0, boundaries=boundaries)
print('Score at true solution:')
print(score(parameters))
print('Found solution: True parameters:' )
for k, x in enumerate(found_parameters):
print(pints.strfloat(x) + ' ' + pints.strfloat(parameters[k]))
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, noisy, '-', alpha=0.25, label='noisy signal')
plt.plot(times, values, alpha=0.4, lw=5, label='original signal')
plt.plot(times, problem.evaluate(found_parameters), 'k--', label='recovered signal')
plt.legend()
plt.show()
Explanation: Finally, we choose a wide set of boundaries and run!
End of explanation
problem = pints.MultiOutputProblem(model, times, noisy)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0, 0, 0, 0, 0],
[10, 10, 10, 20, 20]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
real_parameters1 = np.array(parameters + [sigma, sigma])
xs = [
real_parameters1 * 1.1,
real_parameters1 * 0.9,
real_parameters1 * 1.15,
real_parameters1 * 1.5,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 4, xs, method=pints.MonomialGammaHamiltonianMCMC)
# Add stopping criterion
mcmc.set_max_iterations(200)
mcmc.set_log_interval(1)
# Run in parallel
mcmc.set_parallel(True)
for sampler in mcmc.samplers():
sampler.set_leapfrog_step_size([0.05, 0.2, 0.2, 0.1, 0.1])
sampler.set_leapfrog_steps(10)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
import pints.plot
pints.plot.trace(chains)
plt.show()
Explanation: This shows the parameters are not retrieved entirely correctly, but the traces still strongly overlap.
Sampling with Monomial-gamma HMC
The Fitzhugh-Nagumo model has sensitivities calculated by the forward sensitivities approach, so we can use samplers that use gradients (they will be slower per iteration, although perhaps not in ESS per second!), like Monomial-gamma HMC.
End of explanation
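For reference, the forward sensitivities mentioned above can be inspected directly; this assumes the toy model exposes the simulateS1() interface and should be treated as a sketch rather than verified API usage.
# Peek at the forward sensitivities used by gradient-based samplers (sketch; simulateS1() assumed available)
values_s1, sensitivities = model.simulateS1(parameters, times)
print(values_s1.shape)       # simulated outputs, as from simulate()
print(sensitivities.shape)   # derivatives of each output with respect to each model parameter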
results = pints.MCMCSummary(
chains=chains,
time=mcmc.time(),
parameter_names=['a', 'b', 'c', 'sigma_V', 'sigma_R'],
)
print(results)
Explanation: Print results.
End of explanation
import pints.plot
pints.plot.series(np.vstack(chains), problem)
plt.show()
Explanation: Plot the few posterior predictive simulations versus data.
End of explanation |
8,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PraatIO - doing speech analysis with Python
An introduction and tutorial
<hr>
TABLE OF CONTENTS
An introduction
- <a href="#what_is_praat">What is Praat?</a>
- <a href="#textgrids_and_tiers">TextGrids, IntervalTiers, and PointTiers</a>
- <a href="#physical_textgrids">The physical textgrid</a>
- <a href="#what_is_praatio">What is PraatIO</a>
- <a href="#uses_of_praatio">What are some uses of PraatIO?</a>
A tutorial
- <a href="#installing_praatio">Installing PraatIO</a>
- <a href="#creating_bare">Creating bare textgrid files</a>
- <a href="#example_create_blank_textgrids">Example
Step1: <hr>
<a id="creating_bare"></a>
Creating bare textgrid files
The code for working with textgrids is contained in textgrid.py. This file does not store the classes we will be working with, but it does expose them for convenient access. There are three classes of particular interest and some functions. Let's start with the basics.
TextGrid, IntervalTier, and PointTier are the three classes we'll be using a lot in textgrid.py
For our first, most basic example, we create a TextGrid with a blank IntervalTier and a blank PointTier--ripe for annotating. A minimal Textgrid needs at least one tier but can have as many as you want.
Step2: <a id="example_create_blank_textgrids">
Example
Step3: Bravo! You've saved your colleagues the tedium of creating empty textgrids for each wav file from scratch and removed one vector of human error from your workflow.
There are more things we can do with bare textgrid files, but for now let's move on to how to work with existing textgrid files in praatio.
<hr>
<a id="opening_tgs"></a>
Opening TextGrids
We know how to save textgrids. Now let's learn how to open them.
Step4: <hr>
<a id="getting_tg_and_tier_info"></a>
Getting information from TextGrids and Tiers
So we've opened a TextGrid file. What happens next? A textgrid is just a container for tiers. So after opening a textgrid, probably the first thing you'll want to do is access the tiers.
A TextGrid's tiers are stored in a dictionary called tierDict. The names of the tiers, and their order in the TextGrid, are stored in tierNameList.
Step5: Ok, so with the TextGrid, we got a Tier. What happens next? Most of the time, you'll be accessing the intervals or points stored in the tier. These are stored in the entryList.
For a pointTier, the entryList looks like
Step6: I use this idiom--open textgrid, get target tier, and forloop through the entryList--on a regular basis. For clarity, here the whole idiom is presented in a concise example
Step7: <hr>
<a id="modifying_tgs_tiers"></a>
Modifying TextGrids and Tiers with new()
Textgrids and tiers come with a new() function. This function gives you a new copy of the current instance. The new() function takes the same arguments as the constructor for the object, except that with new() the arguments are all optional. Any arguments you don't provide, will be copied over from the original instance.
Step8: <hr>
<a id="textgrid_objects"></a>
Working with Textgrid objects
TextGrids are containers that store tiers. They come with some methods to help manage the state of the Textgrid.
Step9: The above featured functions are perhaps the most useful functions in praatio. But there are some other functions which I'll mention briefly here.
tg.appendTextgrid(tg2) will append tg2 to the end of tg--modifying all of the times in tg2 so that they appear chronologically after tg.
tg.eraseRegion(start, stop, doShrink) will erase a segment of a textgrid. The erased segment can be left blank, or the textgrid can shrink
tg.insertSpace(start, duration, collisionCode) will insert a blank segment into a textgrid. collisionCode determines what happens to segments that span the start location of the insertion.
tg.mergeTiers() will merge all tiers into a single tier. Overlapping intervals on different tiers will have their labels combined by this process.
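A hedged sketch of the calls listed above; tg and tg2 stand for Textgrid instances opened earlier, the argument values are placeholders, and the exact collision codes and return behaviour should be checked against the praatio documentation.
# Illustrative calls mirroring the signatures listed above (placeholder arguments)
longer_tg = tg.appendTextgrid(tg2)                # tg2's times are shifted to follow tg
shorter_tg = tg.eraseRegion(1.0, 2.0, True)       # erase 1.0 s to 2.0 s and shrink the textgrid
padded_tg = tg.insertSpace(1.0, 0.5, 'stretch')   # insert 0.5 s of blank space at t = 1.0 s
merged_tg = tg.mergeTiers()                       # collapse all tiers into a single tier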
<hr>
<a id="cropping_textgrids"></a>
Cropping TextGrids
The last function to look at with TextGrids is also a very useful and powerful one. Using crop() you can get a...well...cropped TextGrid.
Crop() takes five arguments
Step10: <hr>
<a id="working_with_tiers"></a>
Working with Tiers
Textgrids alone aren't very useful for working with data. The real utility is in the tiers contained in the textgrids. In this section we'll learn about functions that help us work with IntervalTiers and PointTiers.
We'll start with the easy stuff. eraseRegion(), insertSpace(), crop(), and editTimestamps() are back and they work exactly the same on tiers as they did for textgrids. If you only need to modify one tier, it's better to take advantage of this feature, rather than modifying a whole textgrid.
Now let's move on to some of the functions that are unique to tiers. We'll start with deleteEntry() and insertEntry()
Step11: <hr>
The next two functions are very useful when working in conjunction with other numerical data
Step12: <hr>
Often times, we only want to limit the entries that we analyze.
crop(), which we've already seen, allows us to limit by time. find() allows us to limit by label.
find() returns the index of any matches in the entryList. It takes one required argument
Step13: <hr>
<a id="example_extract_from_intervals">
Example
Step14: <hr>
<a id="operations_between_tiers"></a>
Operations between Tiers
A lot of functions require a second tier. Two are only useful in specific situations. I'll just introduce them here
Step15: That output might be a little hard to visualize. Here is what the output looks like in a textgrid | Python Code:
!pip install praatio --upgrade
Explanation: PraatIO - doing speech analysis with Python
An introduction and tutorial
<hr>
TABLE OF CONTENTS
An introduction
- <a href="#what_is_praat">What is Praat?</a>
- <a href="#textgrids_and_tiers">TextGrids, IntervalTiers, and PointTiers</a>
- <a href="#physical_textgrids">The physical textgrid</a>
- <a href="#what_is_praatio">What is PraatIO</a>
- <a href="#uses_of_praatio">What are some uses of PraatIO?</a>
A tutorial
- <a href="#installing_praatio">Installing PraatIO</a>
- <a href="#creating_bare">Creating bare textgrid files</a>
- <a href="#example_create_blank_textgrids">Example: Creating blank textgrids from audio recordings</a>
- <a href="#opening_tgs">Opening TextGrids</a>
- <a href="#getting_tg_and_tier_info">Getting information from TextGrids and Tiers</a>
- <a href="#modifying_tgs_tiers">Modifying TextGrids and Tiers with new()</a>
- <a href="#textgrid_objects">Working with Textgrid objects</a>
- <a href="#example_extract_from_intervals">Example: Extracting time series data from specific intervals</a>
- <a href="#cropping_textgrids">Cropping TextGrids</a>
- <a href="#working_with_tiers">Working with Tier objects</a>
- <a href="#operations_between_tiers">Operations between tiers</a>
<a href="#summary">Summary</a>
<a href="#beyond_praatio">Beyond PraatIO</a>
- ProMo
- Pysle
<hr>
An introduction
<a id="what_is_praat"></a>
What is Praat?
PraatIO or Praat Input and Output was originally conceived as a way to work with Praat from within python.
Praat (http://www.fon.hum.uva.nl/praat/) is a freely available tool for doing speech and phonetic analysis. It has a spectrogram visualization tool with overlays of information like pitch track and intensity. This visualization is paired with an editor for transcribing speech and for analyzing speech. It also has tools for extracting acoustic parameters of speech, generating waveforms, and resynthesizing speech. It is a comprehensive and indispensible tool for speech scientists.
Praat comes with its own scripting language for automating tasks.
<a id="textgrids_and_tiers"></a>
TextGrids, IntervalTiers, and PointTiers
The heart of any speech analysis is a transcript. Praat calls its transcript files TextGrids, and the same terminology is used here.
More specifically, a TextGrid, used by Praat or PraatIO, is a collection of independent annotation analyses for a given audio recording. Each layer of analysis is known as a tier. There are two kinds of tiers: IntervalTiers and PointTiers. IntervalTiers are used to annotate events that have duration. Like syllables, words, or utterances. PointTiers are used for annotating instaneous events. Like places where the audio clipped. The peak of a pitch contour. Or the sound of a clap. Etc.
Below is a sample textgrid as seen in Praat, with accompanying wavfile. In this example the textgrid contains two interval tiers and a point tier (named 'phone', 'word', and 'maxF0' respectively). 'phone' marks the phonemes--the consonants and vowels of the words. 'word' indicates the word boundaries. And 'maxF0' marks the highest peaks of the pitch contour (the blue curve superimposed over the spectrogram) for each word.
<a id="physical_textgrids"></a>
The physical textgrid
Praat has its own plain text format for working with TextGrids. It's easy to read and easy to parse. Here is a small snippet. No magic or wizardry. The TextGrid has a few properties defined--min and max times (the start and end of the textgrid with respect to the audio file) and the number of tiers. Then the tiers are presented in order of appearance. They too have a few properties defined, followed by the intervals and points that they contain. And that's it!
There is a more condensed version of the TextGrid which contains the same information but without the extraneous text that makes the below example so easy to read. I won't cover that here.
```
File type = "ooTextFile"
Object class = "TextGrid"
xmin = 0
xmax = 1.869687
tiers? <exists>
size = 3
item []:
item [1]:
class = "IntervalTier"
name = "phone"
xmin = 0
xmax = 1.869687
intervals: size = 16
intervals [1]:
xmin = 0
xmax = 0.3154201182247563
text = ""
intervals [2]:
xmin = 0.3154201182247563
xmax = 0.38526757369599995
text = "m"
intervals [3]:
xmin = 0.38526757369599995
xmax = 0.4906833231456586
text = "ə"
```
<a id="what_is_praatio"></a>
What is PraatIO
I think Python should have its own library for doing robust speech analysis. That's where PraatIO comes in.
PraatIO is not a python implementation of Praat or an interface to Praat. PraatIO is a pure python library that contains a robust toolset for creating, querying, and modifying textgrid annotations. It also comes with a diverse array of tools that use these annotations to modify speech or extract information from speech. It depends on Praat for some but not all of its functionality.
<a id="uses_of_praatio"></a>
What are some uses of PraatIO?
Creating textgrids for annotation in a consistent manner
Extracting pitch, intensity, and duration from labeled regions of interest
Extracting user-made annotations or verifying user-made annotations
Extracting subsegments of recordings, substituting segments with other segments, making supercuts
<hr>
A tutorial
<a id="installing_praatio"></a>
Installing PraatIO
Before we can run PraatIO, we need to install it. It can be installed easily enough using pip like so. For other installation options, see the main github page for praatio.
End of explanation
from praatio import textgrid
# Textgrids take no arguments--it gets all of its necessary attributes from the tiers that it contains.
tg = textgrid.Textgrid()
# IntervalTiers and PointTiers take four arguments: the tier name, a list of intervals or points,
# a starting time, and an ending time.
wordTier = textgrid.IntervalTier('words', [], 0, 1.0)
maxF0Tier = textgrid.PointTier('maxF0', [], 0, 1.0)
tg.addTier(wordTier)
tg.addTier(maxF0Tier)
tg.save("empty_textgrid.TextGrid", format="short_textgrid", includeBlankSpaces=False)
Explanation: <hr>
<a id="creating_bare"></a>
Creating bare textgrid files
The code for working with textgrids is contained in textgrid.py. This file does not store the classes we will be working with, but it does expose them for convenient access. There are three classes of particular interest and some functions. Let's start with the basics.
TextGrid, IntervalTier, and PointTier are the three classes we'll be using a lot in textgrid.py
For our first, most basic example, we create a TextGrid with a blank IntervalTier and a blank PointTier--ripe for annotating. A minimal Textgrid needs at least one tier but can have as many as you want.
End of explanation
import os
from os.path import join
from praatio import textgrid
from praatio import audio
inputPath = join('..', 'examples', 'files')
outputPath = join(inputPath, "generated_textgrids")
if not os.path.exists(outputPath):
os.mkdir(outputPath)
for fn in os.listdir(inputPath):
name, ext = os.path.splitext(fn)
if ext != ".wav":
continue
duration = audio.getDuration(join(inputPath, fn))
wordTier = textgrid.IntervalTier('words', [], 0, duration)
tg = textgrid.Textgrid()
tg.addTier(wordTier)
tg.save(join(outputPath, name + ".TextGrid"), format="short_textgrid", includeBlankSpaces=False)
# Did it work?
for fn in os.listdir(outputPath):
ext = os.path.splitext(fn)[1]
if ext != ".TextGrid":
continue
print(fn)
Explanation: <a id="example_create_blank_textgrids">
Example: Creating blank textgrids from audio recordings
</a>
The above example gets the job done, but it's frankly not very useful. What about a basic example that does something we would actually need?
One problem with the above example is the ending time. It's 1.0. More often than not, we don't know the exact length of an audio file. The start and end time are actually optional arguments to the constructor. If not supplied, praatio will get them from the min and max values in the list of intervals or points (not generally recommended). Alternatively, one can supply a wave file to set the ending time of the tier.
Scenario: You have a large corpus of speech recordings--telephone conversations. You are coordinating a team of annotators who will transcribe the words in the corpus. Rather than have the annotators create textgrids from scratch, you use praatio to generate skeleton textgrids that they can fill in themselves.
End of explanation
from os.path import join
from praatio import textgrid
inputFN = join('..', 'examples', 'files', 'mary.TextGrid')
tg = textgrid.openTextgrid(inputFN, includeEmptyIntervals=False) # Give it a file name, get back a Textgrid object
Explanation: Bravo! You've saved your colleagues the tedium of creating empty textgrids for each wav file from scratch and removed one vector of human error from your workflow.
There are more things we can do with bare textgrid files, but for now let's move on to how to work with existing textgrid files in praatio.
<hr>
<a id="opening_tgs"></a>
Opening TextGrids
We know how to save textgrids. Now let's learn how to open them.
End of explanation
# What tiers are stored in this textgrid?
print(tg.tierNameList)
# It's possible to access the tiers by their position in the TextGrid
# (i.e. the order they were added in)
firstTier = tg.tierDict[tg.tierNameList[0]]
# Or by their names
wordTier = tg.tierDict['word']
print(firstTier)
Explanation: <hr>
<a id="getting_tg_and_tier_info"></a>
Getting information from TextGrids and Tiers
So we've opened a TextGrid file. What happens next? A textgrid is just a container for tiers. So after opening a textgrid, probably the first thing you'll want to do is access the tiers.
A TextGrid's tiers are stored in a dictionary called tierDict. The names of the tiers, and their order in the TextGrid, are stored in tierNameList.
End of explanation
# I just want the labels from the entryList
labelList = [entry[2] for entry in wordTier.entryList]
print(labelList)
# Get the duration of each interval
# (in this example, an interval is a word, so this outputs word duration)
durationList = []
for start, stop, _ in wordTier.entryList:
durationList.append(stop - start)
print(durationList)
Explanation: Ok, so with the TextGrid, we got a Tier. What happens next? Most of the time, you'll be accessing the intervals or points stored in the tier. These are stored in the entryList.
For a pointTier, the entryList looks like:
[(timeV1, label1), (timeV2, label2), ...]
While for an intervalTier, the entryList looks like:
[(startV1, endV1, label1), (startV2, endV2, label2), ...]
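Iterating over a point tier works the same way, except each entry unpacks into two values instead of three. A minimal sketch (the tier name 'maxF0' here is just an example and may not exist in mary.TextGrid):
```python
# Hypothetical point tier; substitute whatever point tier your textgrid actually contains
maxF0Tier = tg.tierDict['maxF0']
for time, label in maxF0Tier.entryList:
    print(time, label)
```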
End of explanation
# Print out each interval on a separate line
from os.path import join
from praatio import textgrid
inputFN = join('..', 'examples', 'files', 'mary.TextGrid')
tg = textgrid.openTextgrid(inputFN, includeEmptyIntervals=False)
tier = tg.tierDict['word']
for start, stop, label in tier.entryList:
print("From:%f, To:%f, %s" % (start, stop, label))
Explanation: I use this idiom--open textgrid, get target tier, and for-loop through the entryList--on a regular basis. For clarity, the whole idiom is presented here in a concise example
End of explanation
# Sometimes you just want to have two copies of something
newTG = tg.new()
newTier = tier.new()
# emptiedTier and renamedTier are the same as tier, except for the parameter specified in .new()
emptiedTier = tier.new(entryList=[]) # Remove all entries in the entry list
renamedTier = tier.new(name="lexical items") # Rename the tier to 'lexical items'
Explanation: <hr>
<a id="modifying_tgs_tiers"></a>
Modifying TextGrids and Tiers with new()
Textgrids and tiers come with a new() function. This function gives you a new copy of the current instance. The new() function takes the same arguments as the constructor for the object, except that with new() the arguments are all optional. Any arguments you don't provide will be copied over from the original instance.
End of explanation
# Let's reload everything
from os.path import join
from praatio import textgrid
inputFN = join('..', 'examples', 'files', 'mary.TextGrid')
tg = textgrid.openTextgrid(inputFN, includeEmptyIntervals=False)
# Ok, what were our tiers?
print(tg.tierNameList)
# We've already seen how to add a new tier to a TextGrid
# Here we add a new tier, 'utterance', which has one entry that spans the length of the textgrid
utteranceTier = textgrid.IntervalTier(name='utterance', entryList=[('0', tg.maxTimestamp, 'mary rolled the barrel'), ],
minT=0, maxT=tg.maxTimestamp)
tg.addTier(utteranceTier)
print(tg.tierNameList)
# Maybe we decided that we don't need the phone tier. We can remove it using the tier's name.
# The remove function returns the removed tier, in case you want to do something with it later.
wordTier = tg.removeTier('word')
print(tg.tierNameList)
print(wordTier)
# We can also replace one tier with another like so (preserving the order of the tiers)
tg.replaceTier('phone', wordTier)
print(tg.tierNameList)
# Or rename a tier
tg.renameTier('word', 'lexical items')
print(tg.tierNameList)
Explanation: <hr>
<a id="textgrid_objects"></a>
Working with Textgrid objects
TextGrids are containers that store tiers. They come with some methods to help manage the state of the Textgrid.
End of explanation
# Let's start by observing the pre-cropped entry lists
wordTier = tg.tierDict['lexical items']
print(wordTier.entryList)
utteranceTier = tg.tierDict['utterance']
print(utteranceTier.entryList)
print("Start time: %f" % wordTier.minTimestamp)
print("End time: %f" % utteranceTier.maxTimestamp)
# Now let's crop and see what changes!
# Crop takes four arguments
# If mode is 'truncated', all intervals contained within the crop region will appear in the
# returned TG--however, intervals that span the crop region will be truncated to fit within
# the crop region
# If rebaseToZero is True, the times in the textgrid are recalibrated with the start of
# the crop region being 0.0s
croppedTG = tg.crop(0.5, 1.0, mode='truncated', rebaseToZero=True)
wordTier = croppedTG.tierDict['lexical items']
print(wordTier.entryList)
utteranceTier = croppedTG.tierDict['utterance']
print(utteranceTier.entryList)
print("Start time: %f" % croppedTG.minTimestamp)
print("End time: %f" % croppedTG.maxTimestamp)
# If rebaseToZero is False, the values in the cropped textgrid will be what they were in the
# original textgrid (but without values outside the crop region)
# Compare the output here with the output above
croppedTG = tg.crop(0.5, 1.0, mode='truncated', rebaseToZero=False)
wordTier = croppedTG.tierDict['lexical items']
print(wordTier.entryList)
utteranceTier = croppedTG.tierDict['utterance']
print(utteranceTier.entryList)
print("Start time: %f" % croppedTG.minTimestamp)
print("End time: %f" % croppedTG.maxTimestamp)
# If mode is 'strict', only wholly contained intervals will be included in the output.
# Compare this with the previous result
croppedTG = tg.crop(0.5, 1.0, mode='strict', rebaseToZero=False)
# Let's start by observing the pre-cropped entry lists
wordTier = croppedTG.tierDict['lexical items']
print(wordTier.entryList)
utteranceTier = croppedTG.tierDict['utterance']
print(utteranceTier.entryList)
print("Start time: %f" % croppedTG.minTimestamp)
print("End time: %f" % croppedTG.maxTimestamp)
# If mode is 'lax', partially contained intervals will be wholly contained in the output.
# Compare this with the previous result
croppedTG = tg.crop(0.5, 1.0, mode='lax', rebaseToZero=False)
# Let's start by observing the pre-cropped entry lists
wordTier = croppedTG.tierDict['lexical items']
print(wordTier.entryList)
utteranceTier = croppedTG.tierDict['utterance']
print(utteranceTier.entryList)
print("Start time: %f" % croppedTG.minTimestamp)
print("End time: %f" % croppedTG.maxTimestamp)
Explanation: The above featured functions are perhaps the most useful functions in praatio. But there are some other functions which I'll mention briefly here.
tg.appendTextgrid(tg2) will append tg2 to the end of tg--modifying all of the times in tg2 so that they appear chronologically after tg.
tg.eraseRegion(start, stop, doShrink) will erase a segment of a textgrid. The erased segment can be left blank, or the textgrid can shrink
tg.insertSpace(start, duration, collisionCode) will insert a blank segment into a textgrid. collisionCode determines what happens to segments that span the start location of the insertion.
tg.mergeTiers() will merge all tiers into a single tier. Overlapping intervals on different tiers will have their labels combined by this process.
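As a rough illustration only—the file names and time values below are made up, and the exact call signatures and whether each call returns a new textgrid or modifies it in place may differ between praatio versions (check the docs)—combining these might look like:
```python
tg = textgrid.openTextgrid("some_file.TextGrid", includeEmptyIntervals=False)
tg2 = textgrid.openTextgrid("some_other_file.TextGrid", includeEmptyIntervals=False)

combined = tg.appendTextgrid(tg2)        # tg2's times are shifted so it follows tg
shrunk = tg.eraseRegion(0.0, 0.5, True)  # erase the first 0.5 s; True -> shrink the textgrid
merged = tg.mergeTiers()                 # one tier; overlapping labels get combined
```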
<hr>
<a id="cropping_textgrids"></a>
Cropping TextGrids
The last function to look at with TextGrids is also a very useful and powerful one. Using crop() you can get a...well...cropped TextGrid.
Crop() takes four arguments:
crop(startTime, endTime, mode, rebaseToZero)
startTime, endTime - these are the start and end times that define the crop region. Simple enough
mode - one of 'strict', 'lax', or 'truncated'. With 'strict', only wholly contained intervals are included in the output. With 'lax', intervals that are at least partially contained are included whole. With 'truncated', partially contained intervals are included but truncated to fit within the crop region.
rebaseToZero - if True, the entry time values will be subtracted by startTime.
Let's see the effects of these different arguments:
End of explanation
# Let's reload everything, just as before
from os.path import join
from praatio import textgrid
inputFN = join('..', 'examples', 'files', 'mary.TextGrid')
tg = textgrid.openTextgrid(inputFN, includeEmptyIntervals=False)
# Ok, what are our tiers?
print(tg.tierNameList)
# The entryList, which holds the tier point or interval data, is the heart of the tier.
# Recall the 'new()' function, if you want to modify all of the entries in a tier at once
wordTier = tg.tierDict['word']
newEntryList = [(start, stop, 'bloop') for start, stop, label in wordTier.entryList]
newWordTier = wordTier.new(entryList=newEntryList)
print(wordTier.entryList)
print(newWordTier.entryList)
# If, however, we only want to modify a few entries, there are some functions for doing so
# deleteEntry() takes an entry and deletes it
maryEntry = wordTier.entryList[0]
wordTier.deleteEntry(maryEntry)
print(wordTier.entryList)
# insertEntry() does the opposite of deleteEntry.
wordTier.insertEntry(maryEntry)
print(wordTier.entryList)
print()
# you can also set the collision code to 'merge' or 'replace' to set the behavior in the event an entry already exists
# And the collisionReportingMode can be used to have warnings printed out when a collision occurs
wordTier.insertEntry((maryEntry[0], maryEntry[1], 'bob'), collisionMode='replace', collisionReportingMode='silence')
print(wordTier.entryList)
Explanation: <hr>
<a id="working_with_tiers"></a>
Working with Tiers
Textgrids alone aren't very useful for working with data. The real utility is in the tiers contained in the textgrids. In this section we'll learn about functions that help us work with IntervalTiers and PointTiers.
We'll start with the easy stuff. eraseRegion(), insertSpace(), crop(), and editTimestamps() are back and they work exactly the same on tiers as they did for textgrids. If you only need to modify one tier, it's better to take advantage of this feature, rather than modifying a whole textgrid.
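For instance, cropping a single tier looks just like cropping a whole textgrid. A small sketch reusing the word tier loaded above (same arguments as before):
```python
# Crop just the word tier to the 0.5 s - 1.0 s region
croppedWordTier = wordTier.crop(0.5, 1.0, "truncated", False)
print(croppedWordTier.entryList)
```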
Now let's move on to some of the functions that are unique to tiers. We'll start with deleteEntry() and insertEntry()
End of explanation
# Let's say we have some time series data
# Where the data is organized as [(timeV1, dataV1a, dataV1b, ...), (timeV2, dataV2a, dataV2b, ...), ...]
dataValues = [(0.1, 15), (0.2, 98), (0.3, 105), (0.4, 210), (0.5, ),
(0.6, 154), (0.7, 181), (0.8, 110), (0.9, 203), (1.0, 240)]
# Often times when working with such data, we want to know which data
# corresponds to certain speech events
# e.g. what was the max pitch during the stressed vowel of a particular word etc...
intervalDataList = wordTier.getValuesInIntervals(dataValues)
# The returned list is of the form [(interval1, subDataList1), (interval2, subDataList2), ...]
for interval, subDataList in intervalDataList:
print(interval)
print(subDataList)
print()
Explanation: <hr>
The next two functions are very useful when working in conjunction with other numerical data:
IntervalTier.getValuesInIntervals() and PointTier.getValuesAtPoints()
End of explanation
bobWordIList = wordTier.find('bob')
bobWord = wordTier.entryList[bobWordIList[0]]
print(bobWord)
Explanation: <hr>
Often, we want to limit which entries we analyze.
crop(), which we've already seen, allows us to limit by time. find() allows us to limit by label.
find() returns the indices of any matches in the entryList. It takes one required argument: the string to match. It also takes two optional arguments: a flag for allowing partial (substring) matches, and a flag for treating the search string as a regular expression
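For example, using positional values for those two optional flags (their exact keyword names vary between praatio versions, so they are left positional in this sketch):
```python
# Substring match: indices of entries whose label merely contains 'bo'
partial_matches = wordTier.find('bo', True, False)
# Regular-expression match: indices of entries whose label matches the pattern
regex_matches = wordTier.find('b.b', False, True)
```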
End of explanation
import os
from os.path import join
from praatio import textgrid
from praatio import pitch_and_intensity
# For pitch extraction, we need the location of praat on your computer
#praatEXE = r"C:\Praat.exe"
praatEXE = "/Applications/Praat.app/Contents/MacOS/Praat"
# The 'os.getcwd()' is kindof a hack. With jypter __file__ is undefined and
# os.getcwd() seems to default to the praatio installation files.
rootPath = join(os.getcwd(), '..', 'examples', 'files')
pitchPath = join(rootPath, "pitch_extraction", "pitch")
fnList = [('mary.wav', 'mary.TextGrid'),
('bobby.wav', 'bobby_words.TextGrid')]
# The names of interest -- in an example working with more data, this would be more comprehensive
nameList = ['mary', 'BOBBY', 'lisa', 'john', 'sarah', 'tim', ]
outputList = []
for wavName, tgName in fnList:
pitchName = os.path.splitext(wavName)[0] + '.txt'
tg = textgrid.openTextgrid(join(rootPath, tgName), includeEmptyIntervals=False)
# 1 - get pitch values
pitchList = pitch_and_intensity.extractPitch(join(rootPath, wavName),
join(pitchPath, pitchName),
praatEXE, 50, 350,
forceRegenerate=True)
# 2 - find the intervals where a name was spoken
nameIntervals = []
targetTier = tg.tierDict['word']
for name in nameList:
findMatches = targetTier.find(name)
for i in findMatches:
nameIntervals.append(targetTier.entryList[i])
# 3 - isolate the relevant pitch values
matchedIntervals = []
intervalDataList = []
for entry in nameIntervals:
start, stop, label = entry
croppedTier = targetTier.crop(start, stop, "truncated", False)
intervalDataList = croppedTier.getValuesInIntervals(pitchList)
matchedIntervals.extend(intervalDataList)
# 4 - find the maximum value
for interval, subDataList in intervalDataList:
pitchValueList = [pitchV for timeV, pitchV in subDataList]
maxPitch = max(pitchValueList)
outputList.append((wavName, interval, maxPitch))
# Output results
for name, interval, value in outputList:
print((name, interval, value))
Explanation: <hr>
<a id="example_extract_from_intervals">
Example: Extracting time series data from specific intervals
</a>
To end this subsection, let's try another real-life example using what we've learned. For this, we're going to use one praatio tool that falls outside the scope of this tutorial.
This function can be used to extract time and pitch values from audio recordings: praatio.pitch_and_intensity.extractPitch() (the function called in the code above). Don't worry too much about how the function works--that's for another tutorial.
Scenario: You want to examine the maximum pitch that was produced whenever a speaker was saying someone's name. You're not interested in the pitch for other words.
To do this, 1) we're first going to have to extract the pitch from audio recordings, 2) then we're going to need to find when the words were spoken, 3) then we'll isolate the relevant pitch values for each word, and 4) finally find the maximum value.
End of explanation
# Let's reload everything
from os.path import join
from praatio import textgrid
# We'll use a special textgrid for this purpose
inputFN = join('..', 'examples', 'files', 'damon_set_test.TextGrid')
tg = textgrid.openTextgrid(inputFN, includeEmptyIntervals=False)
# Ok, what are our tiers?
print(tg.tierNameList)
# Let's take set operations between these two tiers
syllableTier = tg.tierDict['tonicSyllable']
errorTier = tg.tierDict['manually_labeled_pitch_errors']
print(syllableTier.entryList)
print(errorTier.entryList)
# Set difference -- the entries that are not in errorTier are kept
diffTier = syllableTier.difference(errorTier)
diffTier = diffTier.new(name="different")
print(diffTier.entryList)
# Set intersection -- the overlapping regions between the two tiers are kept
interTier = syllableTier.intersection(errorTier)
interTier = interTier.new(name="intersection")
print(interTier.entryList)
# Set union -- the two tiers are merged
unionTier = syllableTier.union(errorTier)
unionTier = unionTier.new(name="union")
print(unionTier.entryList)
Explanation: <hr>
<a id="operations_between_tiers"></a>
Operations between Tiers
Several functions operate on a second tier. Two of them are only useful in specific situations, so I'll just introduce them here:
append() is a function on tiers that appends a tier to another one. Could be useful if you are combining two audio files that have been transcribed.
morph() changes the duration of labeled segments in one textgrid to that of another while leaving silences alone and leaving alone the labels. It's used by my ProMo library. Maybe you'll find some other use for it.
Of more general use, there are the functions that do set operations: difference(), intersection(), and union()
End of explanation
outputFN = join('..', 'examples', 'files', 'damon_set_test_output.TextGrid')
setTG = textgrid.Textgrid()
for tier in [syllableTier, errorTier, diffTier, interTier, unionTier]:
setTG.addTier(tier)
setTG.save(outputFN, format="short_textgrid", includeBlankSpaces=True)
Explanation: That output might be a little hard to visualize. Here is what the output looks like in a textgrid:
Just for more practice, this textgrid could be generated with code like the following:
End of explanation |
8,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
My notebook to practics Pandas
This is some notes
Step1: Working with series
loc uses the specified index and it is inclusive
iloc uses the python index and is exclusive
Step2: Working with Data Frames
can create from a list of lists
can do it from a dictionaries
or you can import from somewhere like a csv
Step3: reading csv file
Step4: Slicing
Step5: Groupbys
Step6: Conditionals
Note that we use the & and | to string conditionals together because we use NumPy's masking rules
Step7: Sorting
Step8: Dropping and adding columns
The argument axis=1 means that we are droppinng a column.
The argument inplace=True tells it to modify the dataframe
If we used inplace==False then it would return a copy that we could use, but the original would remain unmodified. | Python Code:
import pandas as pd
Explanation: My notebook to practice Pandas
These are some notes
End of explanation
## difference between loc and iloc
vals = [0, 1, 2]
idx = [10, 11, 12]
ser = pd.Series(vals, index=idx)
print("...using loc")
print(ser.loc[10:11])
print("\n...using iloc")
print(ser.iloc[0:2])
## creating series
vals = [0, 1, 2, 3]
idx = ['a', 'b', 'c', 'd']
ser = pd.Series(vals, index=idx)
print(ser)
print("\n...single index")
print(ser.loc['a'])
print("\n...a slice")
print(ser.loc['a':'c'])
Explanation: Working with series
loc uses the labels of the specified index, and slicing with it is inclusive of the endpoint
iloc uses the integer (positional) index, and slicing with it is exclusive of the endpoint
End of explanation
## from a list of lists
vals = [[1,2,3], [4,5,6]]
cols = ['a', 'b', 'c']
df = pd.DataFrame(data=vals, columns=cols)
df
## from a list of dictionaried
rows = [{'a': 1, 'b': 2, 'c':3}, {'a': 4, 'b': 5, 'c':6}]
df = pd.DataFrame(rows)
df
## how does pandas deal with missing numeric values
vals = [[10, 11], [20, 21, 22]]
cols = ['a', 'c', 'd']
df = pd.DataFrame(data=vals, columns=cols)
df
## how does pandas deal with missing string values
vals = [['z', 'y'], ['x', 'w', 'v']]
cols = ['a', 'c', 'd']
df = pd.DataFrame(data=vals, columns=cols)
df
df.transpose()
df
## dealing with missing data
vals = [[10, 11], [20, 21, 22]]
cols = ['a', 'c', 'd']
df1 = pd.DataFrame(data=vals, columns=cols)
df1.dropna(inplace=True)
df1
## fill them in with something
df2 = pd.DataFrame(data=vals, columns=cols)
df2.fillna(-1, inplace=True)
df2
Explanation: Working with Data Frames
can create from a list of lists
can do it from a list of dictionaries
or you can import from somewhere like a csv
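One more constructor worth knowing (standard pandas, not part of the original notebook): building a DataFrame from a dict of column lists.
```python
## from a dict of columns
df = pd.DataFrame({'a': [1, 4], 'b': [2, 5], 'c': [3, 6]})
df
```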
End of explanation
df = pd.read_csv('winequality-red.csv', delimiter=';')
df.head()
import pprint
print("Shape of my data frame: {} x {}".format(df.shape[0],df.shape[1]))
df.describe()
import re
## clean up the column names
cols = df.columns.tolist()
df.columns = [re.sub("\s+","_",col) for col in df.columns.tolist()]
## print the columns description
print("Shape of my data frame: {} x {}".format(df.shape[0],df.shape[1]))
pp = pprint.PrettyPrinter(indent=2)
pp.pprint(list(df.columns))
## basic statistical calculations
df.describe()
Explanation: reading csv file
End of explanation
## single column reference returns a slice
a_column = df['fixed_acidity']
print(type(a_column))
## multiple column references returns another dataframe
some_columns = df[['fixed_acidity','citric_acid']]
print(type(some_columns))
## get rows from a given column using loc
df.loc[100:102, 'fixed_acidity']
## get rows from a given column using iloc
df['fixed_acidity'].iloc[100:102]
## get rows from a multiple columns using iloc
df[['fixed_acidity','chlorides']].iloc[100:102]
Explanation: Slicing
End of explanation
df_quality = df.groupby("quality")
df_quality.mean()
Explanation: Groupbys
End of explanation
mask = df['alcohol'] > 12.0
df1 = df[mask]
df1.groupby("quality").mean()
## multiple conditionals
df[(df['alcohol'] > 12.0) & (df['quality'] > 5)].groupby("quality").mean()
## Can also use query to evaluate conditionals
result_df = df.query('alcohol >= 9.10 and pH < 3.5')
result_df.loc[1:6, ['alcohol', 'pH']]
Explanation: Conditionals
Note that we use the & and | to string conditionals together because we use NumPy's masking rules
End of explanation
## single column sort
df.sort_values(by='fixed_acidity', ascending=True)
## multicolumn sort
df.sort_values(by=['fixed_acidity', 'volatile_acidity'], ascending=True)
Explanation: Sorting
End of explanation
df.drop('sulphates', axis=1, inplace=True)
df.info()
## Using eval
df.eval('total_acidity = volatile_acidity + fixed_acidity', inplace=True)
df[['total_acidity', 'volatile_acidity', 'fixed_acidity']].head()
Explanation: Dropping and adding columns
The argument axis=1 means that we are dropping a column.
The argument inplace=True tells it to modify the dataframe in place.
If we used inplace=False then it would return a copy that we could use, but the original would remain unmodified.
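A tiny illustration of the inplace=False variant, using another column from the same wine dataset:
```python
## inplace=False returns a modified copy and leaves df untouched
df_copy = df.drop('chlorides', axis=1, inplace=False)
df_copy.info()
```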
End of explanation |
8,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algo - TSP - Traveling Salesman Problem
TSP, Traveling Salesman Problem ou Problème du Voyageur de Commerce est un problème classique. Il s'agit de trouver le plus court chemin passant par des villes en supposant qu'il existe une route entre chaque paire de villes.
Step1: Enoncé
On part d'un ensemble de villes aléatoires.
Step2: Q1
Step3: La première étape consiste à calculer la distance d'un chemin passant par toutes les villes.
Step4: Ensuite, pour voir la solution, on insère le code qui permet de dessiner le chemin dans une fonction.
Step5: Q2
On rédige l'algorithme.
Step6: C'est pas extraordinaire.
Q3
Lorsque deux segments du chemin se croisent, il est possible de construire un autre chemin plus court en retournant une partie du chemin.
Step7: Il n'y a plus de croisements, ce qui est l'effet recherché.
Q4
On pourrait combiner ces deux fonctions pour améliorer l'algorithme qui resterait sans doute très long pour un grand nombre de villes. On pourrait initialiser l'algorithme avec une permutation moins aléatoire pour accélérer la convergence. Pour ce faire, on regroupe les deux villes les plus proches, puis de proche en proche...
Step8: Pas si mal... Il reste un croisement. On applique la fonction de la question précédente. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
Explanation: Algo - TSP - Traveling Salesman Problem
TSP, the Traveling Salesman Problem (Problème du Voyageur de Commerce), is a classic problem. The goal is to find the shortest path through a set of cities, assuming there is a road between every pair of cities.
End of explanation
import numpy
import matplotlib.pyplot as plt
villes = numpy.random.rand(20, 2)
plt.plot(villes[:, 0], villes[:, 1], 'b-o')
plt.plot([villes[0, 0], villes[-1, 0]],
[villes[0, 1], villes[-1, 1]], 'b-o');
Explanation: Problem statement
We start from a set of random cities.
End of explanation
plt.plot(villes[:, 0], villes[:, 1], 'b-o')
plt.plot([villes[0, 0], villes[-1, 0]],
[villes[0, 1], villes[-1, 1]], 'b-o');
Explanation: Q1: choose a random permutation of the cities and compute the length of the path that connects them in that order
Q2: pick two cities at random, swap them, and keep the permutation if it improves the distance
Q3: pick two cities at random, and reverse one of the two halves...
Q4: try every possible permutation... just kidding...
Pick the two closest cities, connect them, start again, and then... you will surely find something to finish with.
Answers
Q1
We redraw the route between the cities.
End of explanation
def distance_ville(v1, v2):
return numpy.sum((v1 - v2) ** 2) ** 0.5
def distance_tour(villes, permutation):
tour = distance_ville(villes[permutation[0]],
villes[permutation[-1]])
for i in range(0, len(permutation) - 1):
tour += distance_ville(villes[permutation[i]],
villes[permutation[i + 1]])
return tour
distance_tour(villes, list(range(villes.shape[0])))
Explanation: The first step is to compute the length of a path that goes through all the cities.
End of explanation
def dessine_tour(villes, perm):
fig, ax = plt.subplots(1, 1, figsize=(4, 4))
ax.plot(villes[perm, 0], villes[perm, 1], 'b-o')
ax.plot([villes[perm[0], 0], villes[perm[-1], 0]],
[villes[perm[0], 1], villes[perm[-1], 1]], 'b-o')
ax.set_title("dist=%f" % distance_tour(villes, perm))
return ax
perm = list(range(villes.shape[0]))
dessine_tour(villes, perm);
Explanation: Next, to visualize the solution, we wrap the code that draws the path in a function.
End of explanation
def ameliore_tour(villes, perm=None):
# On copie la permutation perm pour éviter de modifier celle
# transmise à la fonction. Si la permutation est vide,
# on lui affecte la permutation identique.
perm = (perm.copy() if perm is not None
else list(range(villes.shape[0])))
# On calcule la distance actuelle.
dist_min = distance_tour(villes, perm)
# Initialisation.
cont = True
nb_perm, nb_iter = 0, 0
# Tant que la distance n'est pas améliorée dans les dernières
# len(perm) itérations.
while cont or nb_iter < len(perm):
nb_iter += 1
# On tire deux villes au hasard.
a = numpy.random.randint(0, len(perm) - 2)
b = numpy.random.randint(a + 1, len(perm) - 1)
# On permute les villes.
perm[a], perm[b] = perm[b], perm[a]
# On calcule la nouvelle distance.
dist = distance_tour(villes, perm)
# Si elle est meilleure...
if dist < dist_min:
# On la garde.
dist_min = dist
cont = True
nb_perm += 1
nb_iter = 0
else:
# Sinon, on annule la modification.
perm[a], perm[b] = perm[b], perm[a]
cont = False
return dist_min, nb_perm, perm
dist, nb_perm, perm = ameliore_tour(villes)
print("nb perm", nb_perm)
dessine_tour(villes, perm);
Explanation: Q2
We write out the algorithm.
End of explanation
def ameliore_tour_renversement(villes, perm=None):
perm = (perm.copy() if perm is not None
else list(range(villes.shape[0])))
dist_min = distance_tour(villes, perm)
cont = True
nb_perm, nb_iter = 0, 0
while cont or nb_iter < len(perm) ** 2:
nb_iter += 1
# Une partie qui change. On fait une copie de la permutation.
p0 = perm.copy()
a = numpy.random.randint(0, len(perm) - 2)
b = numpy.random.randint(a + 1, len(perm) - 1)
# On retourne une partie de cette permutation.
if a == 0:
perm[0:b] = perm[b:0:-1]
perm[b] = p0[0]
else:
perm[a:b+1] = perm[b:a-1:-1]
# La suite est quasi-identique.
dist = distance_tour(villes, perm)
if dist < dist_min:
dist_min = dist
cont = True
nb_perm += 1
nb_iter = 0
else:
# On reprend la copie. C'est plus simple
# que de faire le retournement inverse.
perm = p0
cont = False
return dist_min, nb_perm, perm
dist, nb_perm, perm = ameliore_tour_renversement(villes)
print("nb perm", nb_perm)
dessine_tour(villes, perm);
Explanation: It is not great.
Q3
When two segments of the path cross, a shorter path can be built by reversing part of the path.
End of explanation
from scipy.spatial.distance import cdist
def build_permutation(villes):
pairs = cdist(villes, villes)
max_dist = pairs.ravel().max()
for i in range(villes.shape[0]):
pairs[i, i] = max_dist
arg = numpy.argmin(pairs, axis=1)
arg_dist = [(pairs[i, arg[i]], i, arg[i]) for i in range(villes.shape[0])]
mn = min(arg_dist)
perm = list(mn[1:])
pairs[perm[0], :] = max_dist
pairs[:, perm[0]] = max_dist
while len(perm) < villes.shape[0]:
last = perm[-1]
arg = numpy.argmin(pairs[last:last+1])
perm.append(arg)
pairs[perm[-2], :] = max_dist
pairs[:, perm[-2]] = max_dist
return perm
perm = build_permutation(villes)
dessine_tour(villes, perm);
Explanation: There are no more crossings, which is the desired effect.
Q4
We could combine these two functions to improve the algorithm, which would no doubt still be very slow for a large number of cities. We could also initialize the algorithm with a less random permutation to speed up convergence. To do that, we pair up the two closest cities, then keep going from neighbor to neighbor...
End of explanation
dist, nb_perm, perm = ameliore_tour_renversement(villes, perm)
print("nb perm", nb_perm)
dessine_tour(villes, perm);
Explanation: Not bad... One crossing remains. We apply the function from the previous question.
End of explanation |
8,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
熟悉Pandas Sklearn
CSV to DataFrame
Step1: 可视化数据对于识别模型中潜在的模式十分重要
Step2: 特征转换
除了'sex'特征之外,'age'是其次重要的特征,如果按照数据集中age的原始值来搞显然太离散了容易降低泛化能力导致过拟合,所以需要处理age将people划分到不同的年龄段组成的组中
Cabin特征每行记录都是以一个字母开头,显然第一个字母比后边的数字的更重要,所以把第一个字母单独抽取出来作为特征
Fare是另一个特征值连续的特征,需要简化,通过data_train.Fare.describe()获取特征的分布,
从name特征中抽取信息而不是使用全名,抽取last name 和 name 前缀称谓(Mr Mrs )然后拼起来作为新的特征
最后丢弃掉没有太大用处的特征(比如Ticket Name)
Step3: 特征处理的最后阶段
特征预处理的最后阶段是对标签型的数据标准化,skLearn里的LabelEncoder可以将唯一的string值转换成number数值,把数据变得对于各种算法来说更灵活可用.结果是对于人类而言不是太友好,但是对于机器刚刚好的一堆数值. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
data_train = pd.read_csv('./input/titanic/train.csv')
data_test = pd.read_csv('./input/titanic/test.csv')
data_train.sample(20)
Explanation: Getting familiar with Pandas and Sklearn
CSV to DataFrame
End of explanation
sns.barplot(x='Embarked',y='Survived',hue='Sex',data=data_train)
sns.pointplot(x='Pclass',y='Survived',hue='Sex',data=data_train,palette={'male':'blue','female':'pink'},markers=['*','o'],linestyles=['--','-'])
Explanation: Visualizing the data is very important for spotting the potential patterns a model can use.
End of explanation
data_train.Fare.describe()
data_train.Sex.describe()
data_train.Name.describe()
def simplify_ages(df):
print('_'*10)
print(df.Age.head(10))
df.Age = df.Age.fillna(-0.5)
print('_'*10)
print(df.Age.head(10))
bins = (-1,0,5,12,18,25,35,60,120)
group_names = ['Unknown','Baby','Child','Teenager','Student','Young adult','Adult','Senior']
categories = pd.cut(df.Age,bins,labels=group_names)
df.Age = categories
print('_'*10)
print(df.Age.head(10))
return df
def simplify_cabins(df):
df.Cabin = df.Cabin.fillna('N')
df.Cabin = df.Cabin.apply(lambda x:x[0])
return df
def simplify_fares(df):
df.Fare = df.Fare.fillna(-0.5)
bins = (-1,0,8,15,31,1000)
group_names = ['Unknown','1_quartile','2_quartile','3_quartile','4_quartile']
categories = pd.cut(df.Fare,bins,labels=group_names)
df.Fare = categories
return df
def format_name(df):
df['Lname'] = df.Name.apply(lambda x:x.split(' ')[0])
df['NamePrefix'] = df.Name.apply(lambda x:x.split(' ')[1])
return df
def drop_features(df):
return df.drop(['Ticket','Name','Embarked'],axis=1)
def transform_features(df):
df = simplify_ages(df)
df = simplify_cabins(df)
df = simplify_fares(df)
df = format_name(df)
df = drop_features(df)
return df
data_train = transform_features(data_train)
data_test = transform_features(data_test)
print('='*20)
data_train.sample(20)
sns.barplot(x='Age',y='Survived',hue='Sex',data=data_train)
sns.barplot(x='Cabin',y='Survived',hue='Sex',data=data_train)
sns.barplot(x='Fare',y='Survived',hue='Sex',data=data_train)
Explanation: Feature transformation
Besides 'Sex', 'Age' is the next most important feature. Using the raw age values from the dataset would clearly be far too granular, which hurts generalization and leads to overfitting, so 'Age' is processed by binning people into groups of age ranges.
Every 'Cabin' record starts with a letter, and the first letter is clearly more important than the digits that follow, so that first letter is extracted on its own as a feature.
'Fare' is another feature with continuous values that needs simplifying; data_train.Fare.describe() gives the distribution of the feature.
Instead of using the full name, information is extracted from the 'Name' feature: the last name and the name prefix (Mr, Mrs, ...) are pulled out and used as new features.
Finally, features that are not of much use (such as Ticket and Name) are dropped.
End of explanation
from sklearn import preprocessing
def encode_features(df_train,df_test):
features = ['Fare']
Explanation: The final stage of feature processing
The last step of feature preprocessing is to standardize the label-type data. Sklearn's LabelEncoder converts each unique string value into a number, which makes the data more flexible and usable for all kinds of algorithms. The result is not very friendly for humans, but it is exactly the pile of numbers a machine wants.
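A sketch of how encode_features might be completed based on that description. The exact feature list and the idea of fitting each encoder on the combined train+test values are assumptions, not part of the original code:
```python
def encode_features(df_train, df_test):
    features = ['Fare', 'Cabin', 'Age', 'Sex', 'Lname', 'NamePrefix']
    df_combined = pd.concat([df_train[features], df_test[features]])
    for feature in features:
        le = preprocessing.LabelEncoder()
        le = le.fit(df_combined[feature])
        df_train[feature] = le.transform(df_train[feature])
        df_test[feature] = le.transform(df_test[feature])
    return df_train, df_test

data_train, data_test = encode_features(data_train, data_test)
data_train.head()
```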
End of explanation |
8,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tah hráče
Je tam opravdu vše potřeba?
Step1: Není
Step2: Vyhodnocení piškvorek
Co by se tady dalo udělat jednodušeji?
Step3: Upravená varianta
Step4: Piškvorky
Step5: Chyby v programu a jejich řešení
Nejdříve chyby v syntaxi, na které si Python stěžuje ihned po přečtení programu.
Step6: Pro řešení těch dalších už musíme kód v naší funkci spustit, jinak se o jeho chování nic nedozvíme.
Step7: Neuložené soubory | Python Code:
def tah_hrace (pole):
'Vrátí herní pole se zaznamenaným tahem hráče'
t = 0
while t == 0:
pozice = int(input('Na které políčko chceš hrát? '))
if (pozice > 0) and (pozice<=20) and (pole[pozice-1] == '-'):
return tah(pole,pozice,'x')
t = 1
else:
print('Špatně zadaná pozice, zkus to znovu.')
Explanation: The player's move
Is everything in there really necessary?
End of explanation
def tah_hrace(pole):
'Vrátí herní pole se zaznamenaným tahem hráče'
while True:
pozice = int(input('Na které políčko chceš hrát? '))
if (pozice > 0) and (pozice<=20) and (pole[pozice-1] == '-'):
return tah(pole,pozice,'x')
else:
print('Špatně zadaná pozice, zkus to znovu.')
Explanation: It is not
End of explanation
def vyhodnot(pole):
"Vyhodnotí stav pole."
krizek = "xxx"
kolecko = "ooo"
volno = "-"
if krizek in pole and kolecko not in pole:
return("x")
elif kolecko in pole and krizek not in pole:
return("o")
elif volno not in pole and krizek not in pole and kolecko not in pole:
return("!")
else:
None
Explanation: Evaluating the tic-tac-toe board
What could be done more simply here?
End of explanation
def vyhodnot(pole):
"Vyhodnotí stav pole."
if 'xxx' in pole:
return("x")
elif 'ooo' in pole:
return("o")
elif '-' not in pole:
return("!")
else:
return '-'
Explanation: A cleaned-up variant
End of explanation
from random import randrange
def vyhodnot(pole):
"Vyhodnotí stav pole."
if 'xxx' in pole:
return("x")
elif 'ooo' in pole:
return("o")
elif '-' not in pole:
return("!")
else:
return '-'
def tah(pole, pozice, symbol):
"Vrátí herní pole s daným symbolem umístěným na danou pozici."
return pole[:pozice] + symbol + pole[pozice + 1:]
def tah_hrace(herni_pole):
"Ptá se hráče na kterou pozici chce hrát a vrací herní pole se zaznamenaným tahem"
while True:
cislo_pozice = int(input("Na kterou pozici chceš hrát? "))
if cislo_pozice >= 0 and cislo_pozice < len(herni_pole) and herni_pole[cislo_pozice] == "-":
return tah(herni_pole, cislo_pozice, "x")
else:
print("Špatná pozice, zkus to znovu. ")
def tah_pocitace(herni_pole):
"Vrátí herní pole se zaznamenaným tahem počítače. "
while True:
cislo_pozice = randrange(len(herni_pole))
if herni_pole[cislo_pozice] == "-":
return tah(herni_pole, cislo_pozice, "o")
def piskvorky():
"Vygeneruje prázdné pole a střídá tah hráče a počítače. "
pole = "-" * 20
while True:
print(pole)
pole = tah_hrace(pole)
print(pole)
if vyhodnot(pole) != '-':
break
pole = tah_pocitace(pole)
if vyhodnot(pole) != '-':
break
print(pole)
if vyhodnot(pole) == '!':
print('Remíza!')
elif vyhodnot(pole) == 'x':
print('Vyhrála jsi!')
elif vyhodnot(pole) == 'o':
print('Vyhrál počítač!')
piskvorky()
Explanation: Tic-tac-toe
End of explanation
def piskvorky1d(symbol):
symbol_hrac = input('Chces o nebo x?)
pole= '-' * 20
print(pole)
vysledek == '-'
while vysledek =='-'
tah_hrace(pole,symbol_hrac)
print(pole)
if symbol_hrac == 'o':
symbol_pocitac == 'x'
else:
symbol_pocitac == 'o'
tah_pocitace(pole,symbol_pocitac)
print(pole)
vysledek == vyhodnot(pole)
print(vysledek)
def piskvorky1d(symbol):
symbol_hrac = input('Chces o nebo x?')
pole= '-' * 20
print(pole)
vysledek == '-'
while vysledek =='-'
tah_hrace(pole,symbol_hrac)
print(pole)
if symbol_hrac == 'o':
symbol_pocitac == 'x'
else:
symbol_pocitac == 'o'
tah_pocitace(pole,symbol_pocitac)
print(pole)
vysledek == vyhodnot(pole)
print(vysledek)
def piskvorky1d(symbol):
symbol_hrac = input('Chces o nebo x?')
pole= '-' * 20
print(pole)
vysledek == '-'
while vysledek =='-':
tah_hrace(pole,symbol_hrac)
print(pole)
if symbol_hrac == 'o':
symbol_pocitac == 'x'
else:
symbol_pocitac == 'o'
tah_pocitace(pole,symbol_pocitac)
print(pole)
vysledek == vyhodnot(pole)
print(vysledek)
def piskvorky1d(symbol):
symbol_hrac = input('Chces o nebo x?')
pole= '-' * 20
print(pole)
vysledek == '-'
while vysledek =='-':
tah_hrace(pole,symbol_hrac)
print(pole)
if symbol_hrac == 'o':
symbol_pocitac == 'x'
else:
symbol_pocitac == 'o'
tah_pocitace(pole,symbol_pocitac)
print(pole)
vysledek == vyhodnot(pole)
print(vysledek)
Explanation: Errors in the program and how to fix them
First, the syntax errors, which Python complains about as soon as it reads the program.
End of explanation
def piskvorky1d(symbol):
symbol_hrac = input('Chces o nebo x?')
pole= '-' * 20
print(pole)
vysledek == '-'
while vysledek =='-':
tah_hrace(pole,symbol_hrac)
print(pole)
if symbol_hrac == 'o':
symbol_pocitac == 'x'
else:
symbol_pocitac == 'o'
tah_pocitace(pole,symbol_pocitac)
print(pole)
vysledek == vyhodnot(pole)
print(vysledek)
piskvorky1d()
def piskvorky1d():
symbol_hrac = input('Chces o nebo x?')
pole= '-' * 20
print(pole)
vysledek == '-'
while vysledek =='-':
tah_hrace(pole,symbol_hrac)
print(pole)
if symbol_hrac == 'o':
symbol_pocitac == 'x'
else:
symbol_pocitac == 'o'
tah_pocitace(pole,symbol_pocitac)
print(pole)
vysledek == vyhodnot(pole)
print(vysledek)
piskvorky1d()
def piskvorky1d():
symbol_hrac = input('Chces o nebo x?')
pole= '-' * 20
print(pole)
vysledek = '-'
while vysledek =='-':
tah_hrace(pole,symbol_hrac)
print(pole)
if symbol_hrac == 'o':
symbol_pocitac == 'x'
else:
symbol_pocitac == 'o'
tah_pocitace(pole,symbol_pocitac)
print(pole)
vysledek == vyhodnot(pole)
print(vysledek)
piskvorky1d()
Explanation: To fix the remaining ones we have to actually run the code in our function, otherwise we learn nothing about its behaviour.
End of explanation
def vyhodnot(pole):
while True:
if pole =
Explanation: Unsaved files
End of explanation |
8,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Description
Step1: Init
Step2: Determining the probability of detecting the taxa across the entire gradient
Step3: skewed normal distribution
Step4: small uniform distribution
Step5: Notes
Even with fragment sizes of 1-2 kb, the taxa would likely not be detected even if the gradient contained 1e9 16S copies of the taxon.
Does this make sense based on the theory of diffusion used?
with DBL 'smearing'
Determining the probability of detecting in all fragments
skewed normal distribution
Step6: Notes
Even if 1% of DNA is in DBL (that then diffuses back into the gradient)
Step7: with DBL 'smearing' (smaller DBL)
Determining the probability of detecting in all fragments
skewed normal distribution
Step8: DBL with abundance-weighted smearing
Step9: Plotting pre-frac abundance vs heavy fraction P | Python Code:
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome3/validation/'
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
figDir = '/home/nick/notebook/SIPSim/figures/'
nprocs = 3
Explanation: Description:
For empirical data, most taxa (>0.1% abundance) are detected across the entire gradient.
Checking whether a similar pattern is seen with the simulated genome data
Setting variables
End of explanation
import os
import numpy as np
import dill
import pandas as pd
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(plyr)
library(dplyr)
library(tidyr)
library(gridExtra)
if not os.path.isdir(workDir):
os.makedirs(workDir)
Explanation: Init
End of explanation
# max 13C shift
max_13C_shift_in_BD = 0.036
# min BD (that we care about)
min_GC = 13.5
min_BD = min_GC/100.0 * 0.098 + 1.66
# max BD (that we care about)
max_GC = 80
max_BD = max_GC / 100.0 * 0.098 + 1.66 # 80.0% G+C
max_BD = max_BD + max_13C_shift_in_BD
## BD range of values
BD_vals = np.arange(min_BD, max_BD, 0.001)
Explanation: Determining the probability of detecting the taxa across the entire gradient
End of explanation
F = os.path.join(workDir, 'ampFrags_real_kde_dif.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
pdf[k] = v.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
Explanation: skewed normal distribution
End of explanation
F = os.path.join(workDir, 'ampFrags_sm_kde_dif.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
pdf[k] = v.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
Explanation: small uniform distribution
End of explanation
BD_vals = np.arange(min_BD, max_BD, 0.001)
F = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
pdf[k] = v.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
Explanation: Notes
Even with fragment sizes of 1-2 kb, the taxa would likely not be detected even if the gradient contained 1e9 16S copies of the taxon.
Does this make sense based on the theory of diffusion used?
with DBL 'smearing'
Determining the probability of detecting in all fragments
skewed normal distribution
End of explanation
BD_vals = np.arange(min_BD, max_BD, 0.001)
F = os.path.join(workDir, 'ampFrags_sm_kde_dif_DBL.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
pdf[k] = v.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
Explanation: Notes
Even if 1% of DNA is in DBL (that then diffuses back into the gradient):
the probability of detecting a taxon in all the gradient positions is >= 1e-7
this is feasible for matching the empirical data!
small fragment size distribution
End of explanation
BD_vals = np.arange(min_BD, max_BD, 0.001)
F = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL_fa1e-4.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
pdf[k] = v.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
Explanation: with DBL 'smearing' (smaller DBL)
Determining the probability of detecting in all fragments
skewed normal distribution
End of explanation
BD_vals = np.arange(min_BD, max_BD, 0.001)
F = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL-comm.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for libID,v in kde.items():
for taxon,k in v.items():
pdf[taxon] = k.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
%%R
df.g %>%
group_by(taxon_name) %>%
summarize(max_P = max(P),
min_P = min(P)) %>% print
Explanation: DBL with abundance-weighted smearing
End of explanation
%%R -i workDir
F = file.path(workDir, 'comm.txt')
df.comm = read.delim(F, sep='\t') %>%
mutate(rel_abund = rel_abund_perc / 100)
df.comm %>% print
df.g.s = df.g %>%
filter(BD > 1.75) %>%
group_by(BD) %>%
mutate(P_rel_abund = P / sum(P)) %>%
group_by(taxon_name) %>%
summarize(mean_P = mean(P))
df.g.s = inner_join(df.g.s, df.comm, c('taxon_name' = 'taxon_name'))
df.g.s %>% print
ggplot(df.g.s, aes(rel_abund, mean_P)) +
geom_point() +
geom_line()
Explanation: Plotting pre-frac abundance vs heavy fraction P
End of explanation |
8,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Porting Bike-Sharing project-1 to RNN
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Let's build the get_batches
the input of each time step will be one row of data ( one hour prediction array of 53 features)
Step8: Let's test the batches above
Step9: Build the network
Let's build an RNN with TensorFlow
Inputs
Step10: Lstm
Step11: Output
Step12: Validation Accuracy
Step13: Training
Step14: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step15: Create a graph to compare the data and predictions | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Porting Bike-Sharing project-1 to RNN
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
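# The saved (mean, std) pairs let us go back to the original units later, e.g. for 'cnt'
# (purely illustrative here -- the prediction plot at the end of the notebook does this):
# mean, std = scaled_features['cnt']
# unscaled_cnt = data['cnt'] * std + mean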
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
train_targets.head()
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
# each element of x is an array of 56 features and each element of y is an array of 3 targets
# each row of x is one hour of features
def get_batches(x, y, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from the feature array x and target array y.
Arguments
---------
array x and array y: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of hours per batch and number of batches we can make
hours_per_batch = n_seqs * n_steps
n_batches = len(x)//hours_per_batch
# convert from pandas DataFrames to numpy arrays and drop the index column
x = x.reset_index().values[:,1:]
y = y.reset_index().values[:,1:]
# make only full batches
x, y = x[:n_batches*hours_per_batch], y[:n_batches*hours_per_batch]
# TODO: this could be optimized (see the vectorized sketch below)
# x_temp will be (n rows x n_steps wide) where each element is an array of 56 features
# this first loop splits x into rows of n_steps consecutive hours
x_temp = []
y_temp = []
for st in range(0, n_batches*hours_per_batch, n_steps ):
x_temp.append( x[st:st+n_steps] )
y_temp.append( y[st:st+n_steps] )
x = np.asarray(x_temp )
y = np.asarray(y_temp )
# this yields batches of n_seqs sequences, each n_steps long,
# where each element is an array of 56 features (one hour from our data)
for sq in range(0,(n_batches*hours_per_batch)//n_steps, n_seqs ):
yield x[sq:sq+n_seqs,:,:], y[sq:sq+n_seqs,:,:]
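# A possible vectorized alternative to the loops above (a sketch, not part of the
# original notebook): splitting the trimmed arrays into consecutive n_steps-long
# windows is just a reshape, and the yield loop stays the same.
def get_batches_vectorized(x, y, n_seqs, n_steps):
    hours_per_batch = n_seqs * n_steps
    n_batches = len(x) // hours_per_batch
    # .values drops the pandas index; keep only full batches, then reshape into
    # (n_sequences, n_steps, n_features) so each row is n_steps consecutive hours
    x_arr = x.values[:n_batches*hours_per_batch].reshape(-1, n_steps, x.shape[1])
    y_arr = y.values[:n_batches*hours_per_batch].reshape(-1, n_steps, y.shape[1])
    for sq in range(0, x_arr.shape[0], n_seqs):
        yield x_arr[sq:sq+n_seqs], y_arr[sq:sq+n_seqs]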
Explanation: Let's build the get_batches generator
The input at each time step will be one row of data (one hour of observations, an array of 56 features).
End of explanation
print(train_features.tail())
batches = get_batches(train_features, train_targets, 20, 96)
x, y = next(batches)
print(x.shape)
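# With 20 sequences of 96 steps this should print (20, 96, 56),
# assuming the prepared feature frame has 56 columns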
# x, y = next(batches)
# print(x.shape)
Explanation: Let's test the batches above
End of explanation
import tensorflow as tf
num_features = 56
num_targets = 3
batch_size = 10
# one step for each hour that we want the sequence to remember
num_steps = 50
lstm_size = 256
num_layers = 2
learning_rate = 0.0005
keep_prob_val = 0.75
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.float32, [batch_size, None, num_features], name='inputs')
targets = tf.placeholder(tf.float32, [batch_size, None, num_targets], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
learningRate = tf.placeholder(tf.float32, name='learningRate')
Explanation: Build the network
Let's build an RNN with TensorFlow
Inputs
End of explanation
# # Use a basic LSTM cell
# lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# # Add dropout to the cell
# drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# # Stack up multiple LSTM layers, for deep learning
# #cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
# initial_state = cell.zero_state(batch_size, tf.float32)
# Replaced the code above because TensorFlow with GPU was complaining about it
def lstm_cell():
cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)
return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)], state_is_tuple = True)
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM
End of explanation
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
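# outputs has shape (batch_size, num_steps, lstm_size); final_state is a tuple of
# LSTMStateTuple objects, one per stacked layer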
# activation_fn=None gives a linear output layer; since this is a regression on
# real-valued (standardized) targets, no squashing activation should be applied here
predictions = tf.contrib.layers.fully_connected(outputs, 3, activation_fn=None)
cost = tf.losses.mean_squared_error(targets, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
End of explanation
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), tf.cast(tf.round(targets), tf.int32))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
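# Note: the targets are continuous, so this rounded "accuracy" is a coarse measure;
# tracking a validation MSE would be a possible alternative (not part of the original):
# val_mse = tf.losses.mean_squared_error(targets, predictions)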
Explanation: Validation Accuracy
End of explanation
epochs = 100
saver = tf.train.Saver()
# validation accuracy and training loss, collected for plotting
val_accuracy=[]
training_loss=[]
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_features, train_targets, batch_size, num_steps), 1):
feed = {inputs: x,
targets: y,
keep_prob: keep_prob_val,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
training_loss.append(loss)
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_features, val_targets, batch_size, num_steps):
feed = {inputs: x,
targets: y,
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
val_accuracy.append( np.mean(val_acc) )
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/bike-sharing.ckpt")
plt.plot(val_accuracy, label='Accuracy')
plt.legend()
_ = plt.ylim()
plt.plot(training_loss, label='Loss')
plt.legend()
_ = plt.ylim()
Explanation: Training
End of explanation
test_acc = []
#with tf.Session(graph=graph) as sess:
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_features, test_targets, batch_size, num_steps), 1):
feed = {inputs: x,
targets: y,
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
batch = get_batches(test_features, test_targets, batch_size, num_steps)
x,y = next(batch)
feed = {inputs: x,
targets: y,
keep_prob: 1,
initial_state: test_state}
pred = sess.run([predictions], feed_dict=feed)
pred = pred[0].reshape(batch_size*num_steps, -1)  # (10*50, 3) = (500, 3)
pred[:,0] *= std
pred[:,0] += mean
lf = pred[:,0]
# predictions = network.run(test_features).T*std + mean
ax.plot(lf, label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(lf))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])  # .loc instead of the deprecated .ix
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Create a graph to compare the data and predictions
End of explanation |
8,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
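As a minimal sketch of a completed cell (the value is an illustrative assumption, e.g. a correlated-k scheme with 16 longwave bands), the entry could read:
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
DOC.set_value(16)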
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
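A filled-in version of this cell is a single ENUM choice; for instance (illustrative only), a model using McICA would record:
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
DOC.set_value("Monte Carlo Independent Column Approximation")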
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
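For example (an assumed value, not prescribed by the template), a first-order closure would be recorded as:
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
DOC.set_value(1)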
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
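Boolean properties are filled in the same way; a scheme that applies a counter-gradient term would record (illustrative sketch):
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
DOC.set_value(True)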
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
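As an illustration only, a COSP set-up mimicking a 94 GHz (CloudSat-like) cloud radar would record the frequency in Hz:
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
DOC.set_value(94.0e9)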
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
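For instance (illustrative value, assuming a fixed solar constant near the modern total solar irradiance):
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
DOC.set_value(1361.0)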
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
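A minimal sketch, assuming pre-industrial orbital parameters are held fixed:
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
DOC.set_value(1850)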
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
8,389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
make the train pivot table; duplicates exist when the index is ['Cliente','Producto']
for each (Cliente_ID, Producto_ID) pair, first find its most common Agencia_ID, Canal_ID and Ruta_SAK
Step1: make pivot table of test
Step2: groupby use Agencia_ID, Ruta_SAK, Cliente_ID, Producto_ID
Step3: to predict week 8, use data from weeks 3,4,5,6,7 (one week ahead)
to predict week 9, also use weeks 3,4,5,6,7 (two weeks ahead)
Step4: data for predict week [34567----9], time plus 2 week
Step5: test_for private data, week 11
Step6: for two weeks ahead, weeks 4,5,6,7,8 to week 10
Step7: data for predict week 8&9, time plus 1 week
train_45678 for 8+1 =9
Step8: train_34567 7+1 = 8
Step9: concat train_pivot_45678_to_9 & train_pivot_34567_to_8 to perform t_plus_1, train_data is over
Step10: prepare for test data, for week 10, we use 5,6,7,8,9
Step11: begin predict for week 11
train_3456 for 6+2 = 8
Step12: train_4567 for 7 + 2 = 9
Step13: concat
Step14: for test data week 11, we use 6,7,8,9
Step15: over
Step16: create time feature
Step17: fit mean feature on target
Step18: add dummy feature
Step19: add product feature
Step20: add town feature
Step21: begin xgboost training
Step22: for 1 week later
cv rmse 0.451181 with dummy canal, time regr,
cv rmse 0.450972 without dummy canal, time regr,
cv rmse 0.4485676 without dummy canal, time regr, producto info
cv rmse 0.4487434 without dummy canal, time regr, producto info, cliente_per_town
for 2 weeks later
cv rmse 0.4513236 without dummy canal, time regr, producto info | Python Code:
agencia_for_cliente_producto = train_dataset[['Cliente_ID','Producto_ID'
,'Agencia_ID']].groupby(['Cliente_ID',
'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index()
canal_for_cliente_producto = train_dataset[['Cliente_ID',
'Producto_ID','Canal_ID']].groupby(['Cliente_ID',
'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index()
ruta_for_cliente_producto = train_dataset[['Cliente_ID',
'Producto_ID','Ruta_SAK']].groupby(['Cliente_ID',
'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index()
gc.collect()
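# NOTE: despite the '.csv' extensions below, to_pickle writes pandas pickle files; read them back with pd.read_pickle.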
agencia_for_cliente_producto.to_pickle('agencia_for_cliente_producto.csv')
canal_for_cliente_producto.to_pickle('canal_for_cliente_producto.csv')
ruta_for_cliente_producto.to_pickle('ruta_for_cliente_producto.csv')
agencia_for_cliente_producto = pd.read_pickle('agencia_for_cliente_producto.csv')
canal_for_cliente_producto = pd.read_pickle('canal_for_cliente_producto.csv')
ruta_for_cliente_producto = pd.read_pickle('ruta_for_cliente_producto.csv')
# train_dataset['log_demand'] = train_dataset['Demanda_uni_equil'].apply(np.log1p)
pivot_train = pd.pivot_table(data= train_dataset[['Cliente_ID','Producto_ID','log_demand','Semana']],
values='log_demand', index=['Cliente_ID','Producto_ID'],
columns=['Semana'], aggfunc=np.mean,fill_value = 0).reset_index()
pivot_train.head()
pivot_train = pd.merge(left = pivot_train, right = agencia_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID'])
pivot_train = pd.merge(left = pivot_train, right = canal_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID'])
pivot_train = pd.merge(left = pivot_train, right = ruta_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID'])
pivot_train.to_pickle('pivot_train_with_zero.pickle')
pivot_train = pd.read_pickle('pivot_train_with_zero.pickle')
pivot_train.to_pickle('pivot_train_with_nan.pickle')
pivot_train = pd.read_pickle('pivot_train_with_nan.pickle')
pivot_train = pivot_train.rename(columns={3: 'Sem3', 4: 'Sem4',5: 'Sem5', 6: 'Sem6',7: 'Sem7', 8: 'Sem8',9: 'Sem9'})
pivot_train.head()
pivot_train.columns.values
Explanation: make the train_pivot, duplicate exist when index = ['Cliente','Producto']
for each cliente & producto, first find its most common Agencia_ID, Canal_ID, Ruta_SAK
End of explanation
test_dataset = pd.read_csv('origin/test.csv')
test_dataset.head()
test_dataset[test_dataset['Semana'] == 10].shape
test_dataset[test_dataset['Semana'] == 11].shape
pivot_test = pd.merge(left=pivot_train, right = test_dataset[['id','Cliente_ID','Producto_ID','Semana']],
on =['Cliente_ID','Producto_ID'],how = 'inner' )
pivot_test.head()
pivot_test_new = pd.merge(pivot_train[['Cliente_ID', 'Producto_ID', 'Sem3', 'Sem4', 'Sem5', 'Sem6', 'Sem7',
'Sem8', 'Sem9']],right = test_dataset, on = ['Cliente_ID','Producto_ID'],how = 'right')
pivot_test_new.head()
pivot_test_new.to_pickle('pivot_test.pickle')
pivot_test.to_pickle('pivot_test.pickle')
pivot_test = pd.read_pickle('pivot_test.pickle')
pivot_test.head()
Explanation: make pivot table of test
End of explanation
train_dataset.head()
import itertools
col_list = ['Agencia_ID', 'Ruta_SAK', 'Cliente_ID', 'Producto_ID']
all_combine = itertools.combinations(col_list,2)
list_2element_combine = [list(tuple) for tuple in all_combine]
col_1elm_2elm = col_list + list_2element_combine
col_1elm_2elm
train_dataset_test = train_dataset[train_dataset['Semana'] < 8].copy()
Explanation: groupby use Agencia_ID, Ruta_SAK, Cliente_ID, Producto_ID
End of explanation
def categorical_useful(train_dataset,pivot_train):
# if is_train:
# train_dataset_test = train_dataset[train_dataset['Semana'] < 8].copy()
# elif is_train == False:
train_dataset_test = train_dataset.copy()
log_demand_by_agen = train_dataset_test[['Agencia_ID','log_demand']].groupby('Agencia_ID').mean().reset_index()
log_demand_by_ruta = train_dataset_test[['Ruta_SAK','log_demand']].groupby('Ruta_SAK').mean().reset_index()
log_demand_by_cliente = train_dataset_test[['Cliente_ID','log_demand']].groupby('Cliente_ID').mean().reset_index()
log_demand_by_producto = train_dataset_test[['Producto_ID','log_demand']].groupby('Producto_ID').mean().reset_index()
log_demand_by_agen_ruta = train_dataset_test[['Agencia_ID', 'Ruta_SAK',
'log_demand']].groupby(['Agencia_ID', 'Ruta_SAK']).mean().reset_index()
log_demand_by_agen_cliente = train_dataset_test[['Agencia_ID', 'Cliente_ID',
'log_demand']].groupby(['Agencia_ID', 'Cliente_ID']).mean().reset_index()
log_demand_by_agen_producto = train_dataset_test[['Agencia_ID', 'Producto_ID',
'log_demand']].groupby(['Agencia_ID', 'Producto_ID']).mean().reset_index()
log_demand_by_ruta_cliente = train_dataset_test[['Ruta_SAK', 'Cliente_ID',
'log_demand']].groupby(['Ruta_SAK', 'Cliente_ID']).mean().reset_index()
log_demand_by_ruta_producto = train_dataset_test[['Ruta_SAK', 'Producto_ID',
'log_demand']].groupby(['Ruta_SAK', 'Producto_ID']).mean().reset_index()
log_demand_by_cliente_producto = train_dataset_test[['Cliente_ID', 'Producto_ID',
'log_demand']].groupby(['Cliente_ID', 'Producto_ID']).mean().reset_index()
log_demand_by_cliente_producto_agen = train_dataset_test[[
'Cliente_ID','Producto_ID','Agencia_ID','log_demand']].groupby(['Cliente_ID',
'Agencia_ID','Producto_ID']).mean().reset_index()
log_sum_by_cliente = train_dataset_test[['Cliente_ID','log_demand']].groupby('Cliente_ID').sum().reset_index()
ruta_freq_semana = train_dataset[['Semana','Ruta_SAK']].groupby(['Ruta_SAK']).count().reset_index()
clien_freq_semana = train_dataset[['Semana','Cliente_ID']].groupby(['Cliente_ID']).count().reset_index()
agen_freq_semana = train_dataset[['Semana','Agencia_ID']].groupby(['Agencia_ID']).count().reset_index()
prod_freq_semana = train_dataset[['Semana','Producto_ID']].groupby(['Producto_ID']).count().reset_index()
pivot_train = pd.merge(left = pivot_train,right = ruta_freq_semana,
how = 'left', on = ['Ruta_SAK']).rename(columns={'Semana': 'ruta_freq'})
pivot_train = pd.merge(left = pivot_train,right = clien_freq_semana,
how = 'left', on = ['Cliente_ID']).rename(columns={'Semana': 'clien_freq'})
pivot_train = pd.merge(left = pivot_train,right = agen_freq_semana,
how = 'left', on = ['Agencia_ID']).rename(columns={'Semana': 'agen_freq'})
pivot_train = pd.merge(left = pivot_train,right = prod_freq_semana,
how = 'left', on = ['Producto_ID']).rename(columns={'Semana': 'prod_freq'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_agen,
how = 'left', on = ['Agencia_ID']).rename(columns={'log_demand': 'agen_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_ruta,
how = 'left', on = ['Ruta_SAK']).rename(columns={'log_demand': 'ruta_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_cliente,
how = 'left', on = ['Cliente_ID']).rename(columns={'log_demand': 'cliente_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_producto,
how = 'left', on = ['Producto_ID']).rename(columns={'log_demand': 'producto_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_agen_ruta,
how = 'left', on = ['Agencia_ID', 'Ruta_SAK']).rename(columns={'log_demand': 'agen_ruta_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_agen_cliente,
how = 'left', on = ['Agencia_ID', 'Cliente_ID']).rename(columns={'log_demand': 'agen_cliente_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_agen_producto,
how = 'left', on = ['Agencia_ID', 'Producto_ID']).rename(columns={'log_demand': 'agen_producto_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_ruta_cliente,
how = 'left', on = ['Ruta_SAK', 'Cliente_ID']).rename(columns={'log_demand': 'ruta_cliente_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_ruta_producto,
how = 'left', on = ['Ruta_SAK', 'Producto_ID']).rename(columns={'log_demand': 'ruta_producto_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_cliente_producto,
how = 'left', on = ['Cliente_ID', 'Producto_ID']).rename(columns={'log_demand': 'cliente_producto_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_sum_by_cliente,
how = 'left', on = ['Cliente_ID']).rename(columns={'log_demand': 'cliente_for_log_sum'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_cliente_producto_agen,
how = 'left', on = ['Cliente_ID', 'Producto_ID',
'Agencia_ID']).rename(columns={'log_demand': 'cliente_producto_agen_for_log_sum'})
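    # 'corr' is an interaction feature: product-level mean log-demand times client-level mean log-demand,
    # rescaled by the overall median log-demand of the training window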
pivot_train['corr'] = pivot_train['producto_for_log_de'] * pivot_train['cliente_for_log_de'] / train_dataset_test['log_demand'].median()
return pivot_train
def define_time_features(df, to_predict = 't_plus_1' , t_0 = 8):
if(to_predict == 't_plus_1' ):
df['t_min_1'] = df['Sem'+str(t_0-1)]
if(to_predict == 't_plus_2' ):
df['t_min_6'] = df['Sem'+str(t_0-6)]
df['t_min_2'] = df['Sem'+str(t_0-2)]
df['t_min_3'] = df['Sem'+str(t_0-3)]
df['t_min_4'] = df['Sem'+str(t_0-4)]
df['t_min_5'] = df['Sem'+str(t_0-5)]
if(to_predict == 't_plus_1' ):
df['t1_min_t2'] = df['t_min_1'] - df['t_min_2']
df['t1_min_t3'] = df['t_min_1'] - df['t_min_3']
df['t1_min_t4'] = df['t_min_1'] - df['t_min_4']
df['t1_min_t5'] = df['t_min_1'] - df['t_min_5']
if(to_predict == 't_plus_2' ):
df['t2_min_t6'] = df['t_min_2'] - df['t_min_6']
df['t3_min_t6'] = df['t_min_3'] - df['t_min_6']
df['t4_min_t6'] = df['t_min_4'] - df['t_min_6']
df['t5_min_t6'] = df['t_min_5'] - df['t_min_6']
df['t2_min_t3'] = df['t_min_2'] - df['t_min_3']
df['t2_min_t4'] = df['t_min_2'] - df['t_min_4']
df['t2_min_t5'] = df['t_min_2'] - df['t_min_5']
df['t3_min_t4'] = df['t_min_3'] - df['t_min_4']
df['t3_min_t5'] = df['t_min_3'] - df['t_min_5']
df['t4_min_t5'] = df['t_min_4'] - df['t_min_5']
return df
def lin_regr(row, to_predict, t_0, semanas_numbers):
row = row.copy()
row.index = semanas_numbers
row = row.dropna()
    if len(row) > 2:
X = np.ones(shape=(len(row), 2))
X[:,1] = row.index
y = row.values
regr = linear_model.LinearRegression()
regr.fit(X, y)
if(to_predict == 't_plus_1'):
return regr.predict([[1,t_0+1]])[0]
elif(to_predict == 't_plus_2'):
return regr.predict([[1,t_0+2]])[0]
else:
return None
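# Toy check of lin_regr (illustrative only; assumes pandas/numpy/sklearn are already imported earlier in the notebook):
# a row with log-demand 1.0, 1.2, 1.4, 1.6, 1.8 over Semanas 3-7 grows by 0.2 per week, so the
# two-weeks-ahead extrapolation from t_0 = 7 should return roughly 2.2 for Semana 9:
# lin_regr(pd.Series([1.0, 1.2, 1.4, 1.6, 1.8]), to_predict='t_plus_2', t_0=7, semanas_numbers=[3, 4, 5, 6, 7])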
def lin_regr_features(pivot_df,to_predict, semanas_numbers,t_0):
pivot_df = pivot_df.copy()
semanas_names = ['Sem%i' %i for i in semanas_numbers]
columns = ['Sem%i' %i for i in semanas_numbers]
columns.append('Producto_ID')
pivot_grouped = pivot_df[columns].groupby('Producto_ID').aggregate('mean')
pivot_grouped['LR_prod'] = np.zeros(len(pivot_grouped))
pivot_grouped['LR_prod'] = pivot_grouped[semanas_names].apply(lin_regr, axis = 1,
to_predict = to_predict,
t_0 = t_0, semanas_numbers = semanas_numbers )
pivot_df = pd.merge(pivot_df, pivot_grouped[['LR_prod']], how='left', left_on = 'Producto_ID', right_index=True)
pivot_df['LR_prod_corr'] = pivot_df['LR_prod'] * pivot_df['cliente_for_log_sum'] / 100
return pivot_df
cliente_tabla = pd.read_csv('origin/cliente_tabla.csv')
town_state = pd.read_csv('origin/town_state.csv')
town_state['town_id'] = town_state['Town'].str.split(expand = True)[0]  # assumes the leading token of 'Town' is the town code
def add_pro_info(dataset):
train_basic_feature = dataset[['Cliente_ID','Producto_ID','Agencia_ID']].copy()
train_basic_feature.drop_duplicates(inplace = True)
cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' )
# print cliente_per_town.shape
cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' )
# print cliente_per_town.shape
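    # count clients per town: after the groupby-count below, 'NombreCliente' holds the number of clients in each town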
cliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index()
# print cliente_per_town_count.head()
cliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','town_id','Agencia_ID']],
cliente_per_town_count,on = 'town_id',how = 'inner')
# print cliente_per_town_count_final.head()
cliente_per_town_count_final.drop_duplicates(inplace = True)
dataset_final = pd.merge(dataset,cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente','Agencia_ID']],
on = ['Cliente_ID','Producto_ID','Agencia_ID'],how = 'left')
return dataset_final
pre_product = pd.read_csv('preprocessed_products.csv',index_col = 0)
pre_product['weight_per_piece'] = pd.to_numeric(pre_product['weight_per_piece'], errors='coerce')
pre_product['weight'] = pd.to_numeric(pre_product['weight'], errors='coerce')
pre_product['pieces'] = pd.to_numeric(pre_product['pieces'], errors='coerce')
def add_product(dataset):
dataset = pd.merge(dataset,pre_product[['ID','weight','weight_per_piece','pieces']],
left_on = 'Producto_ID',right_on = 'ID',how = 'left')
return dataset
Explanation: to predict week 8 (one week ahead), the features come from weeks 3,4,5,6,7;
to predict week 9 (two weeks ahead), the features also come from weeks 3,4,5,6,7
End of explanation
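# Illustrative sketch (not part of the original pipeline): the lag-window scheme used by
# the cells below, written out explicitly. Each frame is built from feature weeks plus one
# target week, one or two weeks ahead; the four-week windows used for the week-11 chain
# (3456 -> 8, 4567 -> 9, 6789 -> 11) follow the same pattern.
lag_windows = [
    # (feature weeks),       target week, horizon
    ((3, 4, 5, 6, 7), 8,  't_plus_1'),
    ((4, 5, 6, 7, 8), 9,  't_plus_1'),
    ((3, 4, 5, 6, 7), 9,  't_plus_2'),
    ((4, 5, 6, 7, 8), 10, 't_plus_2'),
    ((5, 6, 7, 8, 9), 10, 't_plus_1'),
    ((5, 6, 7, 8, 9), 11, 't_plus_2'),
]
for weeks, target, horizon in lag_windows:
    print('features %s -> target week %d (%s)' % (weeks, target, horizon))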
train_34567 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6,7]), :].copy()
train_pivot_34567_to_9 = pivot_train_zero.loc[(pivot_train['Sem9'].notnull()),:].copy()
train_pivot_34567_to_9 = categorical_useful(train_34567,train_pivot_34567_to_9)
del train_34567
gc.collect()
train_pivot_34567_to_9 = define_time_features(train_pivot_34567_to_9, to_predict = 't_plus_2' , t_0 = 9)
train_pivot_34567_to_9 = lin_regr_features(train_pivot_34567_to_9,to_predict ='t_plus_2',
semanas_numbers = [3,4,5,6,7],t_0 = 7)  # last observed week is 7, so t_0 + 2 lands on the week-9 target
train_pivot_34567_to_9['target'] = train_pivot_34567_to_9['Sem9']
train_pivot_34567_to_9.drop(['Sem8','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_34567_to_9[['Sem3','Sem4','Sem5','Sem6','Sem7']].cumsum(axis = 1)
train_pivot_34567_to_9.drop(['Sem3','Sem4','Sem5','Sem6','Sem7'],axis =1,inplace = True)
train_pivot_34567_to_9 = pd.concat([train_pivot_34567_to_9,train_pivot_cum_sum],axis =1)
train_pivot_34567_to_9 = train_pivot_34567_to_9.rename(columns={'Sem3': 't_m_6_cum',
'Sem4': 't_m_5_cum','Sem5': 't_m_4_cum',
'Sem6': 't_m_3_cum','Sem7': 't_m_2_cum'})
# add geo_info
train_pivot_34567_to_9 = add_pro_info(train_pivot_34567_to_9)
#add product info
train_pivot_34567_to_9 = add_product(train_pivot_34567_to_9)
train_pivot_34567_to_9.drop(['ID'],axis = 1,inplace = True)
gc.collect()
train_pivot_34567_to_9.head()
train_pivot_34567_to_9.columns.values
len(train_pivot_34567_to_9.columns.values)
train_pivot_34567_to_9.to_csv('train_pivot_34567_to_9.csv')
train_pivot_34567_to_9 = pd.read_csv('train_pivot_34567_to_9.csv',index_col = 0)
Explanation: data for predicting week 9 from weeks 3,4,5,6,7 [34567 -> 9], i.e. two weeks ahead
End of explanation
pivot_test.head()
pivot_test_week11 = pivot_test.loc[pivot_test['sem10_sem11'] == 11]
pivot_test_week11.reset_index(drop=True,inplace = True)
pivot_test_week11 = pivot_test_week11.fillna(0)
pivot_test_week11.head()
pivot_test_week11.shape
train_56789 = train_dataset.loc[train_dataset['Semana'].isin([5,6,7,8,9]), :].copy()
train_pivot_56789_to_11 = pivot_test_week11.copy()
train_pivot_56789_to_11 = categorical_useful(train_56789,train_pivot_56789_to_11)
del train_56789
gc.collect()
train_pivot_56789_to_11 = define_time_features(train_pivot_56789_to_11, to_predict = 't_plus_2' , t_0 = 11)
train_pivot_56789_to_11 = lin_regr_features(train_pivot_56789_to_11,to_predict ='t_plus_2' ,
semanas_numbers = [5,6,7,8,9],t_0 = 9)
train_pivot_56789_to_11.drop(['Sem3','Sem4'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_56789_to_11[['Sem5','Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1)
train_pivot_56789_to_11.drop(['Sem5','Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True)
train_pivot_56789_to_11 = pd.concat([train_pivot_56789_to_11,train_pivot_cum_sum],axis =1)
train_pivot_56789_to_11 = train_pivot_56789_to_11.rename(columns={'Sem5': 't_m_6_cum',
'Sem6': 't_m_5_cum','Sem7': 't_m_4_cum',
'Sem8': 't_m_3_cum','Sem9': 't_m_2_cum'})
# add product_info
train_pivot_56789_to_11 = add_pro_info(train_pivot_56789_to_11)
#
train_pivot_56789_to_11 = add_product(train_pivot_56789_to_11)
train_pivot_56789_to_11.drop(['ID'],axis =1,inplace = True)
for col in train_pivot_56789_to_11.columns.values:
train_pivot_56789_to_11[col] = train_pivot_56789_to_11[col].astype(np.float32)
train_pivot_56789_to_11.head()
train_pivot_56789_to_11.columns.values
train_pivot_56789_to_11.shape
new_feature = ['id', 'ruta_freq', 'clien_freq', 'agen_freq',
'prod_freq', 'agen_for_log_de', 'ruta_for_log_de',
'cliente_for_log_de', 'producto_for_log_de', 'agen_ruta_for_log_de',
'agen_cliente_for_log_de', 'agen_producto_for_log_de',
'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',
'cliente_producto_for_log_de', 'cliente_for_log_sum',
'cliente_producto_agen_for_log_sum', 'corr', 't_min_6', 't_min_2',
't_min_3', 't_min_4', 't_min_5', 't2_min_t6', 't3_min_t6',
't4_min_t6', 't5_min_t6', 't2_min_t3', 't2_min_t4', 't2_min_t5',
't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',
't_m_6_cum', 't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum',
'NombreCliente', 'weight', 'weight_per_piece', 'pieces']
len(new_feature)
train_pivot_56789_to_11 = train_pivot_56789_to_11[new_feature]
train_pivot_56789_to_11.head()
train_pivot_56789_to_11['id'] = train_pivot_56789_to_11['id'].astype(int)
train_pivot_56789_to_11.head()
train_pivot_56789_to_11.to_csv('train_pivot_56789_to_11_private.csv',index = False)
Explanation: test set for the private leaderboard data, week 11
End of explanation
pivot_test_week10 = pivot_test.loc[pivot_test['sem10_sem11'] == 10]
pivot_test_week10.reset_index(drop=True,inplace = True)
pivot_test_week10 = pivot_test_week10.fillna(0)
pivot_test_week10.head()
train_45678 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7,8]), :].copy()
train_pivot_45678_to_10 = pivot_test_week10.copy()
train_pivot_45678_to_10 = categorical_useful(train_45678,train_pivot_45678_to_10)
del train_45678
gc.collect()
train_pivot_45678_to_10 = define_time_features(train_pivot_45678_to_10, to_predict = 't_plus_2' , t_0 = 10)
train_pivot_45678_to_10 = lin_regr_features(train_pivot_45678_to_10,to_predict ='t_plus_2' ,
semanas_numbers = [4,5,6,7,8],t_0 = 8)
train_pivot_45678_to_10.drop(['Sem3','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_45678_to_10[['Sem4','Sem5','Sem6','Sem7','Sem8']].cumsum(axis = 1)
train_pivot_45678_to_10.drop(['Sem4','Sem5','Sem6','Sem7','Sem8'],axis =1,inplace = True)
train_pivot_45678_to_10 = pd.concat([train_pivot_45678_to_10,train_pivot_cum_sum],axis =1)
train_pivot_45678_to_10 = train_pivot_45678_to_10.rename(columns={'Sem4': 't_m_6_cum',
'Sem5': 't_m_5_cum','Sem6': 't_m_4_cum',
'Sem7': 't_m_3_cum','Sem8': 't_m_2_cum'})
# add product_info
train_pivot_45678_to_10 = add_pro_info(train_pivot_45678_to_10)
#
train_pivot_45678_to_10 = add_product(train_pivot_45678_to_10)
train_pivot_45678_to_10.drop(['ID'],axis =1,inplace = True)
for col in train_pivot_45678_to_10.columns.values:
train_pivot_45678_to_10[col] = train_pivot_45678_to_10[col].astype(np.float32)
train_pivot_45678_to_10.head()
train_pivot_45678_to_10.columns.values
train_pivot_45678_to_10 = train_pivot_45678_to_10[new_feature]
train_pivot_45678_to_10['id'] = train_pivot_45678_to_10['id'].astype(int)
train_pivot_45678_to_10.head()
train_pivot_45678_to_10.to_pickle('validation_45678_10.pickle')
Explanation: two weeks ahead, weeks 4,5,6,7,8 to week 10 (used as validation)
End of explanation
train_45678 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7,8]), :].copy()
train_pivot_45678_to_9 = pivot_train_zero.loc[(pivot_train['Sem9'].notnull()),:].copy()
train_pivot_45678_to_9 = categorical_useful(train_45678,train_pivot_45678_to_9)
train_pivot_45678_to_9 = define_time_features(train_pivot_45678_to_9, to_predict = 't_plus_1' , t_0 = 9)
del train_45678
gc.collect()
train_pivot_45678_to_9 = lin_regr_features(train_pivot_45678_to_9,to_predict ='t_plus_1',
semanas_numbers = [4,5,6,7,8],t_0 = 8)
train_pivot_45678_to_9['target'] = train_pivot_45678_to_9['Sem9']
train_pivot_45678_to_9.drop(['Sem3','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_45678_to_9[['Sem4','Sem5','Sem6','Sem7','Sem8']].cumsum(axis = 1)
train_pivot_45678_to_9.drop(['Sem4','Sem5','Sem6','Sem7','Sem8'],axis =1,inplace = True)
train_pivot_45678_to_9 = pd.concat([train_pivot_45678_to_9,train_pivot_cum_sum],axis =1,copy = False)
train_pivot_45678_to_9 = train_pivot_45678_to_9.rename(columns={'Sem4': 't_m_5_cum',
'Sem5': 't_m_4_cum','Sem6': 't_m_3_cum', 'Sem7': 't_m_2_cum','Sem8': 't_m_1_cum'})
# add geo_info
train_pivot_45678_to_9 = add_pro_info(train_pivot_45678_to_9)
#add product info
train_pivot_45678_to_9 = add_product(train_pivot_45678_to_9)
train_pivot_45678_to_9.drop(['ID'],axis = 1,inplace = True)
for col in train_pivot_45678_to_9.columns.values:
train_pivot_45678_to_9[col] = train_pivot_45678_to_9[col].astype(np.float32)
gc.collect()
train_pivot_45678_to_9.head()
train_pivot_45678_to_9.columns.values
train_pivot_45678_to_9 = train_pivot_45678_to_9[['ruta_freq', 'clien_freq', 'agen_freq', 'prod_freq',
'agen_for_log_de', 'ruta_for_log_de', 'cliente_for_log_de',
'producto_for_log_de', 'agen_ruta_for_log_de',
'agen_cliente_for_log_de', 'agen_producto_for_log_de',
'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',
'cliente_producto_for_log_de', 'cliente_for_log_sum',
'cliente_producto_agen_for_log_sum', 'corr', 't_min_1', 't_min_2',
't_min_3', 't_min_4', 't_min_5', 't1_min_t2', 't1_min_t3',
't1_min_t4', 't1_min_t5', 't2_min_t3', 't2_min_t4', 't2_min_t5',
't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',
'target', 't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum',
't_m_1_cum', 'NombreCliente', 'weight', 'weight_per_piece', 'pieces']]
train_pivot_45678_to_9.shape
train_pivot_45678_to_9.to_csv('train_pivot_45678_to_9_whole_zero.csv')
# train_pivot_45678_to_9_old = pd.read_csv('train_pivot_45678_to_9.csv',index_col = 0)
sum(train_pivot_45678_to_9['target'].isnull())
Explanation: data for predicting weeks 8 and 9, one week ahead
train_45678: weeks 4-8 predict week 8+1 = 9
End of explanation
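# Helper sketch (an illustration, not used by the original cells): builds the
# Sem -> t_m_k_cum rename map that each block below writes out by hand. For a one-week
# horizon the latest observed week becomes t_m_1_cum; for a two-week horizon it becomes
# t_m_2_cum.
def build_cum_rename_map(feature_weeks, horizon_weeks):
    rename_map = {}
    for offset, week in enumerate(sorted(feature_weeks, reverse=True)):
        rename_map['Sem%d' % week] = 't_m_%d_cum' % (offset + horizon_weeks)
    return rename_map
print(build_cum_rename_map([4, 5, 6, 7, 8], 1))  # matches the manual rename in the cell below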
train_34567 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6,7]), :].copy()
train_pivot_34567_to_8 = pivot_train_zero.loc[(pivot_train['Sem8'].notnull()),:].copy()
train_pivot_34567_to_8 = categorical_useful(train_34567,train_pivot_34567_to_8)
train_pivot_34567_to_8 = define_time_features(train_pivot_34567_to_8, to_predict = 't_plus_1' , t_0 = 8)
del train_34567
gc.collect()
train_pivot_34567_to_8 = lin_regr_features(train_pivot_34567_to_8,to_predict = 't_plus_1',
semanas_numbers = [3,4,5,6,7],t_0 = 7)
train_pivot_34567_to_8['target'] = train_pivot_34567_to_8['Sem8']
train_pivot_34567_to_8.drop(['Sem8','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_34567_to_8[['Sem3','Sem4','Sem5','Sem6','Sem7']].cumsum(axis = 1)
train_pivot_34567_to_8.drop(['Sem3','Sem4','Sem5','Sem6','Sem7'],axis =1,inplace = True)
train_pivot_34567_to_8 = pd.concat([train_pivot_34567_to_8,train_pivot_cum_sum],axis =1)
train_pivot_34567_to_8 = train_pivot_34567_to_8.rename(columns={'Sem3': 't_m_5_cum','Sem4': 't_m_4_cum',
'Sem5': 't_m_3_cum','Sem6': 't_m_2_cum',
'Sem7': 't_m_1_cum'})
# add product_info
train_pivot_34567_to_8 = add_pro_info(train_pivot_34567_to_8)
#add product
train_pivot_34567_to_8 = add_product(train_pivot_34567_to_8)
train_pivot_34567_to_8.drop(['ID'],axis = 1,inplace = True)
for col in train_pivot_34567_to_8.columns.values:
train_pivot_34567_to_8[col] = train_pivot_34567_to_8[col].astype(np.float32)
gc.collect()
train_pivot_34567_to_8.head()
train_pivot_34567_to_8.shape
train_pivot_34567_to_8.columns.values
train_pivot_34567_to_8.to_csv('train_pivot_34567_to_8.csv')
train_pivot_34567_to_8 = pd.read_csv('train_pivot_34567_to_8.csv',index_col = 0)
gc.collect()
Explanation: train_34567 7+1 = 8
End of explanation
train_pivot_xgb_time1 = pd.concat([train_pivot_45678_to_9, train_pivot_34567_to_8],axis = 0,copy = False)
train_pivot_xgb_time1 = train_pivot_xgb_time1[['ruta_freq', 'clien_freq', 'agen_freq', 'prod_freq',
'agen_for_log_de', 'ruta_for_log_de', 'cliente_for_log_de',
'producto_for_log_de', 'agen_ruta_for_log_de',
'agen_cliente_for_log_de', 'agen_producto_for_log_de',
'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',
'cliente_producto_for_log_de', 'cliente_for_log_sum',
'cliente_producto_agen_for_log_sum', 'corr', 't_min_1', 't_min_2',
't_min_3', 't_min_4', 't_min_5', 't1_min_t2', 't1_min_t3',
't1_min_t4', 't1_min_t5', 't2_min_t3', 't2_min_t4', 't2_min_t5',
't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',
'target', 't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum',
't_m_1_cum', 'NombreCliente', 'weight', 'weight_per_piece', 'pieces']]
train_pivot_xgb_time1.columns.values
train_pivot_xgb_time1.shape
np.sum(train_pivot_xgb_time1.memory_usage())/(1024**3)
train_pivot_xgb_time1.to_csv('train_pivot_xgb_time1_44fea_zero.csv',index = False)
train_pivot_xgb_time1.to_csv('train_pivot_xgb_time1.csv')
del train_pivot_xgb_time1
del train_pivot_45678_to_9
del train_pivot_34567_to_8
gc.collect()
Explanation: concatenate train_pivot_45678_to_9 and train_pivot_34567_to_8 to form the t_plus_1 training set; training data preparation is complete
End of explanation
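# Optional sanity check (a sketch, not part of the original flow): peek at the file written
# above to confirm the concatenated t_plus_1 training set kept its feature columns and a
# non-null target before the in-memory intermediates were deleted.
import pandas as pd
check_df = pd.read_csv('train_pivot_xgb_time1_44fea_zero.csv', nrows=5)
print(check_df.shape)
print(check_df['target'].isnull().sum())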
pivot_test.head()
pivot_test_week10 = pivot_test.loc[pivot_test['sem10_sem11'] == 10]
pivot_test_week10.reset_index(drop=True,inplace = True)
pivot_test_week10 = pivot_test_week10.fillna(0)
pivot_test_week10.head()
pivot_test_week10.shape
train_56789 = train_dataset.loc[train_dataset['Semana'].isin([5,6,7,8,9]), :].copy()
train_pivot_56789_to_10 = pivot_test_week10.copy()
train_pivot_56789_to_10 = categorical_useful(train_56789,train_pivot_56789_to_10)
del train_56789
gc.collect()
train_pivot_56789_to_10 = define_time_features(train_pivot_56789_to_10, to_predict = 't_plus_1' , t_0 = 10)
train_pivot_56789_to_10 = lin_regr_features(train_pivot_56789_to_10,to_predict ='t_plus_1' ,
semanas_numbers = [5,6,7,8,9],t_0 = 9)
train_pivot_56789_to_10.drop(['Sem3','Sem4'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_56789_to_10[['Sem5','Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1)
train_pivot_56789_to_10.drop(['Sem5','Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True)
train_pivot_56789_to_10 = pd.concat([train_pivot_56789_to_10,train_pivot_cum_sum],axis =1)
train_pivot_56789_to_10 = train_pivot_56789_to_10.rename(columns={'Sem5': 't_m_5_cum',
'Sem6': 't_m_4_cum','Sem7': 't_m_3_cum',
'Sem8': 't_m_2_cum','Sem9': 't_m_1_cum'})
# add product_info
train_pivot_56789_to_10 = add_pro_info(train_pivot_56789_to_10)
#
train_pivot_56789_to_10 = add_product(train_pivot_56789_to_10)
train_pivot_56789_to_10.drop(['ID'],axis =1,inplace = True)
for col in train_pivot_56789_to_10.columns.values:
train_pivot_56789_to_10[col] = train_pivot_56789_to_10[col].astype(np.float32)
train_pivot_56789_to_10.head()
train_pivot_56789_to_10 = train_pivot_56789_to_10[['id','ruta_freq', 'clien_freq', 'agen_freq',
'prod_freq', 'agen_for_log_de', 'ruta_for_log_de',
'cliente_for_log_de', 'producto_for_log_de', 'agen_ruta_for_log_de',
'agen_cliente_for_log_de', 'agen_producto_for_log_de',
'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',
'cliente_producto_for_log_de', 'cliente_for_log_sum',
'cliente_producto_agen_for_log_sum', 'corr', 't_min_1', 't_min_2',
't_min_3', 't_min_4', 't_min_5', 't1_min_t2', 't1_min_t3',
't1_min_t4', 't1_min_t5', 't2_min_t3', 't2_min_t4', 't2_min_t5',
't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',
't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum', 't_m_1_cum',
'NombreCliente', 'weight', 'weight_per_piece', 'pieces']]
train_pivot_56789_to_10.head()
train_pivot_56789_to_10.shape
len(train_pivot_56789_to_10.columns.values)
train_pivot_56789_to_10.to_pickle('train_pivot_56789_to_10_44fea_zero.pickle')
Explanation: prepare the test data; for week 10 we use weeks 5,6,7,8,9
End of explanation
train_3456 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6]), :].copy()
train_pivot_3456_to_8 = pivot_train.loc[(pivot_train['Sem8'].notnull()),:].copy()
train_pivot_3456_to_8 = categorical_useful(train_3456,train_pivot_3456_to_8)
del train_3456
gc.collect()
train_pivot_3456_to_8 = define_time_features(train_pivot_3456_to_8, to_predict = 't_plus_2' , t_0 = 8)
#notice that the t_0 means different
train_pivot_3456_to_8 = lin_regr_features(train_pivot_3456_to_8,to_predict = 't_plus_2', semanas_numbers = [3,4,5,6],t_0 = 6)
train_pivot_3456_to_8['target'] = train_pivot_3456_to_8['Sem8']
train_pivot_3456_to_8.drop(['Sem7','Sem8','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_3456_to_8[['Sem3','Sem4','Sem5','Sem6']].cumsum(axis = 1)
train_pivot_3456_to_8.drop(['Sem3','Sem4','Sem5','Sem6'],axis =1,inplace = True)
train_pivot_3456_to_8 = pd.concat([train_pivot_3456_to_8,train_pivot_cum_sum],axis =1)
train_pivot_3456_to_8 = train_pivot_3456_to_8.rename(columns={'Sem4': 't_m_4_cum',
'Sem5': 't_m_3_cum','Sem6': 't_m_2_cum', 'Sem3': 't_m_5_cum'})
# add product_info
train_pivot_3456_to_8 = add_pro_info(train_pivot_3456_to_8)
train_pivot_3456_to_8 = add_product(train_pivot_3456_to_8)
train_pivot_3456_to_8.drop(['ID'],axis =1,inplace = True)
train_pivot_3456_to_8.head()
train_pivot_3456_to_8.columns.values
train_pivot_3456_to_8.to_csv('train_pivot_3456_to_8.csv')
Explanation: begin preparing the two-weeks-ahead data used to predict week 11
train_3456: weeks 3-6 predict week 6+2 = 8
End of explanation
train_4567 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7]), :].copy()
train_pivot_4567_to_9 = pivot_train.loc[(pivot_train['Sem9'].notnull()),:].copy()
train_pivot_4567_to_9 = categorical_useful(train_4567,train_pivot_4567_to_9)
del train_4567
gc.collect()
train_pivot_4567_to_9 = define_time_features(train_pivot_4567_to_9, to_predict = 't_plus_2' , t_0 = 9)
#notice that the t_0 means different
train_pivot_4567_to_9 = lin_regr_features(train_pivot_4567_to_9,to_predict = 't_plus_2',
semanas_numbers = [4,5,6,7],t_0 = 7)
train_pivot_4567_to_9['target'] = train_pivot_4567_to_9['Sem9']
train_pivot_4567_to_9.drop(['Sem3','Sem8','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_4567_to_9[['Sem7','Sem4','Sem5','Sem6']].cumsum(axis = 1)
train_pivot_4567_to_9.drop(['Sem7','Sem4','Sem5','Sem6'],axis =1,inplace = True)
train_pivot_4567_to_9 = pd.concat([train_pivot_4567_to_9,train_pivot_cum_sum],axis =1)
train_pivot_4567_to_9 = train_pivot_4567_to_9.rename(columns={'Sem4': 't_m_5_cum',
'Sem5': 't_m_4_cum','Sem6': 't_m_3_cum', 'Sem7': 't_m_2_cum'})
# add product_info
train_pivot_4567_to_9 = add_pro_info(train_pivot_4567_to_9)
train_pivot_4567_to_9 = add_product(train_pivot_4567_to_9)
train_pivot_4567_to_9.drop(['ID'],axis =1,inplace = True)
train_pivot_4567_to_9.head()
train_pivot_4567_to_9.columns.values
train_pivot_4567_to_9.to_csv('train_pivot_4567_to_9.csv')
Explanation: train_4567 for 7 + 2 = 9
End of explanation
train_pivot_xgb_time2 = pd.concat([train_pivot_3456_to_8, train_pivot_4567_to_9],axis = 0,copy = False)
train_pivot_xgb_time2.columns.values
train_pivot_xgb_time2.shape
train_pivot_xgb_time2.to_csv('train_pivot_xgb_time2_38fea.csv')
train_pivot_xgb_time2 = pd.read_csv('train_pivot_xgb_time2.csv',index_col = 0)
train_pivot_xgb_time2.head()
del train_pivot_3456_to_8
del train_pivot_4567_to_9
del train_pivot_xgb_time2
del train_pivot_34567_to_8
del train_pivot_45678_to_9
del train_pivot_xgb_time1
gc.collect()
Explanation: concat
End of explanation
pivot_test_week11 = pivot_test_new.loc[pivot_test_new['Semana'] == 11]
pivot_test_week11.reset_index(drop=True,inplace = True)
pivot_test_week11.head()
pivot_test_week11.shape
train_6789 = train_dataset.loc[train_dataset['Semana'].isin([6,7,8,9]), :].copy()
train_pivot_6789_to_11 = pivot_test_week11.copy()
train_pivot_6789_to_11 = categorical_useful(train_6789,train_pivot_6789_to_11)
del train_6789
gc.collect()
train_pivot_6789_to_11 = define_time_features(train_pivot_6789_to_11, to_predict = 't_plus_2' , t_0 = 11)
train_pivot_6789_to_11 = lin_regr_features(train_pivot_6789_to_11,to_predict ='t_plus_2' ,
semanas_numbers = [6,7,8,9],t_0 = 9)
train_pivot_6789_to_11.drop(['Sem3','Sem4','Sem5'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_6789_to_11[['Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1)
train_pivot_6789_to_11.drop(['Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True)
train_pivot_6789_to_11 = pd.concat([train_pivot_6789_to_11,train_pivot_cum_sum],axis =1)
train_pivot_6789_to_11 = train_pivot_6789_to_11.rename(columns={'Sem6': 't_m_5_cum',
'Sem7': 't_m_4_cum', 'Sem8': 't_m_3_cum','Sem9': 't_m_2_cum'})
# add product_info
train_pivot_6789_to_11 = add_pro_info(train_pivot_6789_to_11)
train_pivot_6789_to_11 = add_product(train_pivot_6789_to_11)
train_pivot_6789_to_11.drop(['ID'],axis = 1,inplace = True)
train_pivot_6789_to_11.head()
train_pivot_6789_to_11.shape
train_pivot_6789_to_11.to_pickle('train_pivot_6789_to_11_new.pickle')
Explanation: for test data week 11, we use 6,7,8,9
End of explanation
% time pivot_train_categorical_useful = categorical_useful(train_dataset,pivot_train,is_train = True)
% time pivot_train_categorical_useful = categorical_useful(train_dataset,pivot_train,is_train = True)
pivot_train_categorical_useful_train.to_csv('pivot_train_categorical_useful_with_nan.csv')
pivot_train_categorical_useful_train = pd.read_csv('pivot_train_categorical_useful_with_nan.csv',index_col = 0)
pivot_train_categorical_useful_train.head()
Explanation: the dataset construction runs above are finished
End of explanation
pivot_train_categorical_useful.head()
pivot_train_categorical_useful_time = define_time_features(pivot_train_categorical_useful,
to_predict = 't_plus_1' , t_0 = 8)
pivot_train_categorical_useful_time.head()
pivot_train_categorical_useful_time.columns
Explanation: create time feature
End of explanation
# Linear regression features
pivot_train_categorical_useful_time_LR = lin_regr_features(pivot_train_categorical_useful_time, to_predict = 't_plus_1', semanas_numbers = [3,4,5,6,7], t_0 = 7)  # to_predict and t_0 added so the call matches the function signature
pivot_train_categorical_useful_time_LR.head()
pivot_train_categorical_useful_time_LR.columns
pivot_train_categorical_useful_time_LR.to_csv('pivot_train_categorical_useful_time_LR.csv')
pivot_train_categorical_useful_time_LR = pd.read_csv('pivot_train_categorical_useful_time_LR.csv',index_col = 0)
pivot_train_categorical_useful_time_LR.head()
Explanation: fit a linear regression on the per-product weekly means and extrapolate it toward the target week
End of explanation
# pivot_train_canal = pd.get_dummies(pivot_train_categorical_useful_train['Canal_ID'])
# pivot_train_categorical_useful_train = pivot_train_categorical_useful_train.join(pivot_train_canal)
# pivot_train_categorical_useful_train.head()
Explanation: add dummy feature
End of explanation
%ls
pre_product = pd.read_csv('preprocessed_products.csv',index_col = 0)
pre_product.head()
pre_product['weight_per_piece'] = pd.to_numeric(pre_product['weight_per_piece'], errors='coerce')
pre_product['weight'] = pd.to_numeric(pre_product['weight'], errors='coerce')
pre_product['pieces'] = pd.to_numeric(pre_product['pieces'], errors='coerce')
pivot_train_categorical_useful_time_LR_weight = pd.merge(pivot_train_categorical_useful_time_LR,
pre_product[['ID','weight','weight_per_piece']],
left_on = 'Producto_ID',right_on = 'ID',how = 'left')
pivot_train_categorical_useful_time_LR_weight.head()
pivot_train_categorical_useful_time_LR_weight = pd.merge(pivot_train_categorical_useful_time_LR,
pre_product[['ID','weight','weight_per_piece']],
left_on = 'Producto_ID',right_on = 'ID',how = 'left')
pivot_train_categorical_useful_time_LR_weight.head()
pivot_train_categorical_useful_time_LR_weight.to_csv('pivot_train_categorical_useful_time_LR_weight.csv')
pivot_train_categorical_useful_time_LR_weight = pd.read_csv('pivot_train_categorical_useful_time_LR_weight.csv',index_col = 0)
pivot_train_categorical_useful_time_LR_weight.head()
Explanation: add product feature
End of explanation
%cd '/media/siyuan/0009E198000CD19B/bimbo/origin'
%ls
cliente_tabla = pd.read_csv('cliente_tabla.csv')
town_state = pd.read_csv('town_state.csv')
town_state['town_id'] = town_state['Town'].str.split()
town_state['town_id'] = town_state['Town'].str.split(expand = True)
train_basic_feature = pivot_train_categorical_useful_time_LR_weight[['Cliente_ID','Producto_ID','Agencia_ID']]
cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' )
cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' )
cliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index()
cliente_per_town_count['NombreCliente'] = cliente_per_town_count['NombreCliente']/float(100000)
cliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','Agencia_ID','town_id']],
cliente_per_town_count,on = 'town_id',how = 'left')
pivot_train_categorical_useful_time_LR_weight_town = pd.merge(pivot_train_categorical_useful_time_LR_weight,
cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente']],
on = ['Cliente_ID','Producto_ID'],how = 'left')
cliente_tabla.head()
town_state.head()
town_state['town_id'] = town_state['Town'].str.split()
town_state['town_id'] = town_state['Town'].str.split(expand = True)
town_state.head()
pivot_train_categorical_useful_time_LR_weight.columns.values
train_basic_feature = pivot_train_categorical_useful_time_LR_weight[['Cliente_ID','Producto_ID','Agencia_ID']]
cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' )
cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' )
cliente_per_town.head()
cliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index()
cliente_per_town_count['NombreCliente'] = cliente_per_town_count['NombreCliente']/float(100000)
cliente_per_town_count.head()
cliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','Agencia_ID','town_id']],
cliente_per_town_count,on = 'town_id',how = 'left')
cliente_per_town_count_final.head()
pivot_train_categorical_useful_time_LR_weight_town = pd.merge(pivot_train_categorical_useful_time_LR_weight,
cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente']],
on = ['Cliente_ID','Producto_ID'],how = 'left')
pivot_train_categorical_useful_time_LR_weight_town.head()
pivot_train_categorical_useful_time_LR_weight_town.columns.values
Explanation: add town feature
End of explanation
train_pivot_xgb_time1.columns.values
train_pivot_xgb_time1 = train_pivot_xgb_time1.drop(['Cliente_ID','Producto_ID','Agencia_ID',
'Ruta_SAK','Canal_ID'],axis = 1)
pivot_train_categorical_useful_train_time_no_nan = pivot_train_categorical_useful_train[pivot_train_categorical_useful_train['Sem8'].notnull()]
# pivot_train_categorical_useful_train_time_no_nan = pivot_train_categorical_useful_train[pivot_train_categorical_useful_train['Sem9'].notnull()]
pivot_train_categorical_useful_train_time_no_nan_sample = pivot_train_categorical_useful_train_time_no_nan.sample(1000000)
train_feature = pivot_train_categorical_useful_train_time_no_nan_sample.drop(['Sem8','Sem9'],axis = 1)
train_label = pivot_train_categorical_useful_train_time_no_nan_sample[['Sem8','Sem9']]
#seperate train and test data
# datasource: sparse_week_Agencia_Canal_Ruta_normalized_csr label:train_label
%time train_set, valid_set, train_labels, valid_labels = train_test_split(train_feature,\
train_label, test_size=0.10)
# dtrain = xgb.DMatrix(train_feature,label = train_label['Sem8'],missing=NaN)
dtrain = xgb.DMatrix(train_feature,label = train_label['Sem8'],missing=np.nan)
param = {'booster':'gbtree',
'nthread': 7,
'max_depth':6,
'eta':0.2,
'silent':0,
'subsample':0.7,
'objective':'reg:linear',
'eval_metric':'rmse',
'colsample_bytree':0.7}
# param = {'eta':0.1, 'eval_metric':'rmse','nthread': 8}
# evallist = [(dvalid,'eval'), (dtrain,'train')]
num_round = 1000
# plst = param.items()
# bst = xgb.train( plst, dtrain, num_round, evallist )
cvresult = xgb.cv(param, dtrain, num_round, nfold=5,show_progress=True,show_stdv=False,
seed = 0, early_stopping_rounds=10)
print(cvresult.tail())
Explanation: begin xgboost training
End of explanation
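# Follow-up sketch (assumes the xgb.cv call above completed): pull the best boosting round
# and its validation RMSE out of cvresult. The column name 'test-rmse-mean' follows the cv
# output of the xgboost version used here and may differ in other versions.
best_round = cvresult['test-rmse-mean'].idxmin()
best_rmse = cvresult['test-rmse-mean'].min()
print('best round: %s, cv rmse: %.6f' % (best_round, best_rmse))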
# xgb.plot_importance(cvresult)
Explanation: one week ahead
cv rmse 0.451181 with dummy Canal features and time-regression features
cv rmse 0.450972 without dummy Canal, with time-regression features
cv rmse 0.4485676 without dummy Canal, with time-regression features and product info
cv rmse 0.4487434 without dummy Canal, with time-regression features, product info and cliente_per_town
two weeks ahead
cv rmse 0.4513236 without dummy Canal, with time-regression features and product info
End of explanation |
8,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import modules
Step1: Enter your details for twitter API
Step2: Set up details for PostGIS DB, run in terminal
Step3: Function which connects to PostGis database and inserts data
Step4: Function to remove the hyperlinks from the text
Step5: Process JSON twitter streamed data
Step6: Main procedure | Python Code:
from twython import TwythonStreamer
import string, json, pprint
import urllib
from datetime import datetime
from datetime import date
from time import *
import string, os, sys, subprocess, time
import psycopg2
import re
from osgeo import ogr
Explanation: Import modules
End of explanation
# get access to the twitter API
APP_KEY = 'fQCYxyQmFDUE6aty0JEhDoZj7'
APP_SECRET = 'ZwVIgnWMpuEEVd1Tlg6TWMuyRwd3k90W3oWyLR2Ek1tnjnRvEG'
OAUTH_TOKEN = '824520596293820419-f4uGwMV6O7PSWUvbPQYGpsz5fMSVMct'
OAUTH_TOKEN_SECRET = '1wq51Im5HQDoSM0Fb5OzAttoP3otToJtRFeltg68B8krh'
Explanation: Enter your details for twitter API
End of explanation
dbname = "demo"
user = "user"
password = "user"
table = "tweets"
Explanation: Set up details for PostGIS DB, run in terminal:
We are going to use a PostGIS database, which requires an empty database to start from. Enter these steps into the terminal to set up your database.
In this example we use "demo" as the name of our database. Feel free to give your database another name, but replace "demo" with the name you have chosen.
Connect to postgres
psql -d postgres
Create database
postgres=# CREATE DATABASE demo;
Switch to new DB
postgres=# \c demo
Add PostGIS extension to new DB
demo=# create extension postgis;
Add Table
demo=# CREATE TABLE tweets (id serial primary key, tweet_id BIGINT, text varchar(140), date DATE, time TIME, geom geometry(POINT,4326) );
Enter your database connection details:
End of explanation
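# Quick connectivity check (an illustrative sketch using the connection details above, not
# part of the original script): confirms the database is reachable and the PostGIS
# extension is active before the stream starts writing rows.
import psycopg2
try:
    conn = psycopg2.connect(dbname = dbname, user = user, password = password)
    cur = conn.cursor()
    cur.execute("SELECT PostGIS_Version();")
    print(cur.fetchone())
    conn.close()
except psycopg2.DatabaseError as e:
    print('Connection check failed: %s' % e)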
def insert_into_DB(tweet_id, tweet_text, tweet_date, tweet_time, tweet_lat, tweet_lon):
try:
conn = psycopg2.connect(dbname = dbname, user = user, password = password)
cur = conn.cursor()
# enter stuff in database
sql = "INSERT INTO " + str(table) + " (tweet_id, text, date, time, geom) \
VALUES (" + str(tweet_id) + ", '" + str(tweet_text) + "', '" + str(tweet_date) + "', '" + str(tweet_time) + "', \
ST_GeomFromText('POINT(" + str(tweet_lon) + " " + str(tweet_lat) + ")', 4326))"
cur.execute(sql)
conn.commit()
conn.close()
except psycopg2.DatabaseError, e:
print 'Error %s' % e
Explanation: Function which connects to PostGis database and inserts data
End of explanation
def remove_link(text):
pattern = r'(https://)'
matcher = re.compile(pattern)
match = matcher.search(text)
if match != None:
text = text[:match.start(1)]
return text
Explanation: Function to remove the hyperlinks from the text
End of explanation
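# Tiny usage check for remove_link (illustrative only): the pattern above cuts the text at
# the first 'https://' link, so plain 'http://' links would be left untouched.
sample_text = "Stuck on the A40 again https://t.co/abc123"
print(remove_link(sample_text))  # -> "Stuck on the A40 again "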
#Class to process JSON data coming from the twitter stream API. Extract relevant fields
class MyStreamer(TwythonStreamer):
def on_success(self, data):
tweet_lat = 0.0
tweet_lon = 0.0
tweet_name = ""
retweet_count = 0
if 'id' in data:
tweet_id = data['id']
if 'text' in data:
tweet_text = data['text'].encode('utf-8').replace("'","''").replace(';','')
tweet_text = remove_link(tweet_text)
if 'coordinates' in data:
geo = data['coordinates']
if geo is not None:
latlon = geo['coordinates']
tweet_lon = latlon[0]
tweet_lat = latlon[1]
if 'created_at' in data:
dt = data['created_at']
tweet_datetime = datetime.strptime(dt, '%a %b %d %H:%M:%S +0000 %Y')
tweet_date = str(tweet_datetime)[:11]
tweet_time = str(tweet_datetime)[11:]
if 'user' in data:
users = data['user']
tweet_name = users['screen_name']
if 'retweet_count' in data:
retweet_count = data['retweet_count']
if tweet_lat != 0:
# call function to write to DB
insert_into_DB(tweet_id, tweet_text, tweet_date, tweet_time, tweet_lat, tweet_lon)
def on_error(self, status_code, data):
print "OOPS FOUTJE: " +str(status_code)
#self.disconnect
Explanation: Process JSON twitter streamed data
End of explanation
def main():
try:
stream = MyStreamer(APP_KEY, APP_SECRET,OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
print 'Connecting to twitter: will take a minute'
except ValueError:
print 'OOPS! that hurts, something went wrong while making connection with Twitter: '+str(ValueError)
# Filter based on bounding box see twitter api documentation for more info
try:
stream.statuses.filter(locations='-0.351468, 51.38494, 0.148271, 51.672343')
except ValueError:
print 'OOPS! that hurts, something went wrong while getting the stream from Twitter: '+str(ValueError)
if __name__ == '__main__':
main()
Explanation: Main procedure
End of explanation |
8,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
06 - For JLab Submission
I was invited to send over my trained model for evaluation!
The model needs changing to be compliant with the rules
Submitted models will be
loaded as-is from a single submitted HDF5 compatible with keras.models.load_model(). The loaded model
will then be fed a final set of data (TEST) that will conform to the formats outlined above. No post
processing on the model’s output will be performed, meaning the models are expected to provide two
real number outputs such that the first output has been trained to give θ and the second to give z.
Change model to give two outputs, not a single array
Output to HDF5 file compatible with keras.models.load_model()
Step1: Generator based on JLab Starter Code
Step2: Double Regression
Step3: Output the trained model
Step4: Visualizations, Evaluation
Step5: Train the original approach
Has many more hyperparameters | Python Code:
%matplotlib inline
Explanation: 06 - For JLab Submission
I was invited to send over my trained model for evaluation!
The model needs changing to be compliant with the rules
Submitted models will be
loaded as-is from a single submitted HDF5 compatible with keras.models.load_model(). The loaded model
will then be fed a final set of data (TEST) that will conform to the formats outlined above. No post
processing on the model’s output will be performed, meaning the models are expected to provide two
real number outputs such that the first output has been trained to give θ and the second to give z.
Change model to give two outputs, not a single array
Output to HDF5 file compatible with keras.models.load_model()
End of explanation
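# Compliance check sketch (assumes the model has already been written to
# 'jlab_submission.h5' by one of the save calls further down): the rules only require that
# keras.models.load_model() can reload the file and that predict() returns two real-valued
# outputs, one for theta and one for z.
import numpy as np
import tensorflow as tf
def check_submission(path='jlab_submission.h5'):
    reloaded = tf.keras.models.load_model(path)
    dummy = np.zeros((1, 100, 36, 1), dtype=np.float32)  # one (height, width, 1) image
    theta_pred, z_pred = reloaded.predict(dummy)
    print(theta_pred.shape, z_pred.shape)
# call check_submission() only after the model file has been saved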
import os
import sys
import gzip
import pandas as pd
import numpy as np
import math
width = 36
height = 100
batch_size = 32
train_df = pd.read_csv('../TRAIN/track_parms.csv')
train_df = train_df.rename(columns={'phi': 'theta'})
valid_df = pd.read_csv('../VALIDATION/track_parms.csv')
valid_df = valid_df.rename(columns={'phi': 'theta'})
STEP_SIZE_TRAIN = len(train_df)/batch_size
STEP_SIZE_VALID = len(valid_df)/batch_size
def generate_arrays_from_file(path, labels_df):
images_path = os.path.join(path, 'images.raw.gz')
print('Generator created for: {}'.format(images_path))
batch_input = []
batch_labels_theta = []
batch_labels_z = []
idx = 0
ibatch = 0
while True:
with gzip.open(images_path) as f:
while True:
# Read in one image
bytes = f.read(width*height)
if len(bytes) != (width*height): break # break into outer loop so we can re-open file
data = np.frombuffer(bytes, dtype='B', count=width*height)
pixels = np.reshape(data, [width, height, 1], order='F')
pixels_norm = np.transpose(pixels.astype(np.float) / 255., axes=(1, 0, 2) )
# Labels
theta = labels_df.theta[idx]
z = labels_df.z[idx]
idx += 1
# Add to batch and check if it is time to yield
batch_input.append( pixels_norm )
batch_labels_theta.append(theta)
batch_labels_z.append( z )
if len(batch_input) == batch_size:
ibatch += 1
# Since we are training multiple loss functions we must
# pass the labels back as a dictionary whose keys match
# the layer their corresponding values are being applied
# to.
labels_dict = {
'theta_output' : np.array(batch_labels_theta),
'z_output' : np.array(batch_labels_z),
}
yield ( np.array(batch_input), labels_dict )
batch_input = []
batch_labels_theta = []
batch_labels_z = []
idx = 0
f.close()
train_generator = generate_arrays_from_file('../TRAIN', train_df)
valid_generator = generate_arrays_from_file('../VALIDATION', valid_df)
Explanation: Generator based on JLab Starter Code:
End of explanation
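# Optional peek (a sketch, not in the starter code, and it consumes one batch from the
# generator): confirm the image tensor and the two-label dictionary have the expected
# shapes before building the model. Assumes the TRAIN images file is present.
batch_images, batch_labels = next(train_generator)
print(batch_images.shape)                  # expected (32, 100, 36, 1)
print(batch_labels['theta_output'].shape)  # expected (32,)
print(batch_labels['z_output'].shape)      # expected (32,)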
import tensorflow as tf
def double_regression_model():
image_input = tf.keras.Input(shape=(height, width, 1),
name='image_input')
x = tf.keras.layers.Conv2D(8, (3, 3))(image_input)
x = tf.keras.layers.Activation(tf.nn.relu)(x)
x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
x = tf.keras.layers.Conv2D(12, (3, 3))(x)
x = tf.keras.layers.Activation(tf.nn.relu)(x)
x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
#### Theta branch
theta_branch = tf.keras.layers.Conv2D(16, (2, 2))(x)
theta_branch = tf.keras.layers.Activation(tf.nn.relu)(theta_branch)
theta_branch = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(theta_branch)
theta_branch = tf.keras.layers.Conv2D(16, (3, 3))(theta_branch)
theta_branch = tf.keras.layers.Activation(tf.nn.relu)(theta_branch)
theta_branch = tf.keras.layers.Flatten()(theta_branch)
theta_branch = tf.keras.layers.Dense(16)(theta_branch)
theta_branch = tf.keras.layers.Activation(tf.nn.relu)(theta_branch)
theta_branch = tf.keras.layers.Dropout(0.5)(theta_branch)
output_theta = tf.keras.layers.Dense(1, activation='linear',
name='theta_output')(theta_branch)
####
#### Vertex branch
z_branch = tf.keras.layers.Conv2D(16, (2, 2))(x)
z_branch = tf.keras.layers.Activation(tf.nn.relu)(z_branch)
z_branch = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(z_branch)
z_branch = tf.keras.layers.Conv2D(16, (3, 3))(z_branch)
z_branch = tf.keras.layers.Activation(tf.nn.relu)(z_branch)
z_branch = tf.keras.layers.Flatten()(z_branch)
z_branch = tf.keras.layers.Dense(16)(z_branch)
z_branch = tf.keras.layers.Activation(tf.nn.relu)(z_branch)
z_branch = tf.keras.layers.Dropout(0.5)(z_branch)
output_z = tf.keras.layers.Dense(1, activation='linear',
name='z_output')(z_branch)
####
model = tf.keras.Model(inputs=image_input, outputs=[output_theta, output_z])
model.compile(
optimizer='adam',
loss={
'theta_output': 'mean_squared_error',
'z_output': 'mean_squared_error'
},
metrics=['mse']
)
return model
model = double_regression_model()
model.summary()
history = model.fit_generator(
generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=15,
initial_epoch=6,
)
Explanation: Double Regression
End of explanation
model.save('jlab_submission.h5')
model.save_weights('jlab_submission_weights.h5')
Explanation: Output the trained model
End of explanation
valid_generator = generate_arrays_from_file('../VALIDATION', valid_df)
y_pred = model.predict_generator(valid_generator, steps=STEP_SIZE_VALID)
valid_df['theta_pred'] = y_pred[0][:50000]
import matplotlib.pyplot as plt
plt.hist(y_pred[0])
valid_df = valid_df.eval('d_theta = theta - theta_pred')
valid_df.d_theta.hist()
Explanation: Visualizations, Evaluation
End of explanation
def original_double_regression_model():
image_input = tf.keras.Input(shape=(height, width, 1),
name='image_input')
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(image_input)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
def branch(input_layer, output_name):
y = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(input_layer)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(y)
y = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(y)
y = tf.keras.layers.Flatten()(y)
y = tf.keras.layers.Dense(64, activation='relu')(y)
y = tf.keras.layers.BatchNormalization()(y)
y = tf.keras.layers.Dropout(0.5)(y)
output = tf.keras.layers.Dense(
1, activation='linear',
name=output_name
)(y)
return output
theta_output = branch(x, 'theta_output')
z_output = branch(x, 'z_output')
model = tf.keras.Model(inputs=image_input, outputs=[theta_output, z_output])
model.compile(
optimizer='adam',
loss={
'theta_output': 'mean_squared_error',
'z_output': 'mean_squared_error'
}
)
return model
original_model = original_double_regression_model()
original_model.summary()
train_generator = generate_arrays_from_file('../TRAIN', train_df)
valid_generator = generate_arrays_from_file('../VALIDATION', valid_df)
original_history = original_model.fit_generator(
generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=7,
initial_epoch=5
)
original_history.history.keys()
plt.plot(original_history.history['z_output_loss'], label="Train z-vertex MSE")
plt.plot(original_history.history['val_z_output_loss'], label="Validation z-vertex MSE")
plt.legend()
plt.show()
original_model.save('../models/jlab_submission.h5')
import keras
reloaded_model = tf.keras.models.load_model('../models/jlab_submission.h5')
Explanation: Train the original approach
Has many more hyperparameters
End of explanation |
8,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: 3. Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step3: Step 2
Step4: 5. Show a sample of the augmented dataset
Step6: 6. Pre-process functions
Step7: 7. Show a sample of the preprocess functions outputs
Step8: 8. Preprocess the Dataset
Step9: 9. Model Architecture
| Layer | Description | Input | Output |
|
Step10: 10. Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Step11: 11. Features and Labels
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
Step12: 12. Training Pipeline
Create a training pipeline that uses the model to classify German Traffic Sign Benchmarks data.
Step13: 13. Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
Step14: 14. Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
Step15: 15. Evaluate accuracy of the different data sets
Step16: Step 3
Step17: 17. Predict the Sign Type for Each Image
Step18: 18. Analyze Performance
Step19: 19. Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability
Step20: Step 4 | Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = './traffic-signs-data/train.p'
validation_file = './traffic-signs-data/valid.p'
testing_file = './traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test))
print("Loading done!")
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
1. Load The Data
End of explanation
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# Number of training examples
n_train = len(X_train)
# Number of testing examples.
n_test = len(X_test)
# Number of validation examples
n_valid = len(X_valid)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).size
print("Number of training examples =", n_train)
print("Number of validation examples =", n_valid)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
2. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
End of explanation
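# Quick structural check (illustrative): list the keys of one pickled split and the shapes
# of the arrays described above; 'sizes' and 'coords' exist in the pickles but are not used
# later in this notebook.
print(train.keys())
print(train['features'].shape, train['labels'].shape)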
import matplotlib.pyplot as plt
import random
import numpy as np
import csv
import pandas as pd
# Visualizations will be shown in the notebook.
%matplotlib inline
def show_sample(features, labels, histogram = 1, sample_num = 1, sample_index = -1, color_map ='brg'):
if histogram == 1 :
col_num = 2
#Create training sample + histogram plot
f, axarr = plt.subplots(sample_num+1, col_num, figsize=(col_num*4,(sample_num+1)*3))
else:
if sample_num <= 4:
col_num = sample_num
else:
col_num = 4
if sample_num%col_num == 0:
row_num = int(sample_num/col_num)
else:
row_num = int(sample_num/col_num)+1
if sample_num == 1:
#Create training sample plot
f, ax = plt.subplots(row_num, col_num)
else:
#Create training sample plot
f, axarr = plt.subplots(row_num, col_num, figsize=(col_num*4,(row_num+1)*2))
signnames = pd.read_csv('signnames.csv')
index = sample_index - 1
for i in range(0, sample_num, 1):
if sample_index < -1:
index = random.randint(0, len(features) - 1)
else:
index = index + 1
if histogram == 1 :
image = features[index].squeeze()
axarr[i,0].set_title('%s' % signnames.iloc[labels[index], 1])
axarr[i,0].imshow(image,color_map)
hist,bins = np.histogram(image.flatten(),256, normed =1 )
cdf = hist.cumsum()
cdf_normalized = cdf * hist.max()/ cdf.max()
axarr[i,1].plot(cdf_normalized, color = 'b')
axarr[i,1].hist(image.flatten(),256, normed =1, color = 'r')
axarr[i,1].legend(('cdf','histogram'), loc = 'upper left')
axarr[i,0].axis('off')
axarr[sample_num,0].axis('off')
axarr[sample_num,1].axis('off')
else:
image = features[index].squeeze()
if row_num > 1:
axarr[int(i/col_num),i%col_num].set_title('%s' % signnames.iloc[labels[index], 1])
axarr[int(i/col_num),i%col_num].imshow(image,color_map)
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
elif sample_num == 1:
ax.set_title('%s' % signnames.iloc[labels[index], 1])
ax.imshow(image,color_map)
ax.axis('off')
ax.axis('off')
ax.axis('off')
else:
axarr[i%col_num].set_title('%s' % signnames.iloc[labels[index], 1])
axarr[i%col_num].imshow(image,color_map)
axarr[i%col_num].axis('off')
axarr[i%col_num].axis('off')
axarr[i%col_num].axis('off')
# Tweak spacing to prevent clipping of title labels
f.tight_layout()
plt.show()
def show_training_dataset_histogram(labels_train,labels_valid,labels_test):
fig, ax = plt.subplots(figsize=(15,5))
temp = [labels_train,labels_valid,labels_test]
n_classes = np.unique(y_train).size
# the histogram of the training data
n, bins, patches = ax.hist(temp, n_classes, label=["Train","Valid","Test"])
ax.set_xlabel('Classes')
ax.set_ylabel('Number of occurrences')
ax.set_title(r'Histogram of the data sets')
ax.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
plt.show()
show_training_dataset_histogram(y_train,y_valid,y_test)
show_sample(X_train, y_train, sample_num = 6)
Explanation: 3. Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
End of explanation
import cv2
from tqdm import tqdm
from sklearn.utils import shuffle
def random_transform_image(dataset, index):
# Hyperparameters
# Values inspired from Pierre Sermanet and Yann LeCun Paper : Traffic Sign Recognition with Multi-Scale Convolutional Networks
Scale_change_max = 0.1
Translation_max = 2 #pixels
Rotation_max = 15 #degrees
Brightness_max = 0.1
# Generate random transformation values
trans_x = np.random.uniform(-Translation_max,Translation_max)
trans_y = np.random.uniform(-Translation_max,Translation_max)
angle = np.random.uniform(-Rotation_max,Rotation_max)
scale = np.random.uniform(1-Scale_change_max,1+Scale_change_max)
bright = np.random.uniform(-Brightness_max,Brightness_max)
#Brightness
#create white image
white_img = 255*np.ones((32,32,3), np.uint8)
black_img = np.zeros((32,32,3), np.uint8)
if bright >= 0:
img = cv2.addWeighted(dataset[index].squeeze(),1-bright,white_img,bright,0)
else:
img = cv2.addWeighted(dataset[index].squeeze(),bright+1,black_img,bright*-1,0)
# Scale
img = cv2.resize(img,None,fx=scale, fy=scale, interpolation = cv2.INTER_CUBIC)
# Get image shape after scaling
rows,cols,chan = img.shape
# Pad with zeroes before rotation if image shape is less than 32*32*3
if rows < 32:
offset = int((32-img.shape[0])/2)
# If shape is an even number
if img.shape[0] %2 == 0:
img = cv2.copyMakeBorder(img,offset,offset,offset,offset,cv2.BORDER_CONSTANT,value=[0,0,0])
else:
img = cv2.copyMakeBorder(img,offset,offset+1,offset+1,offset,cv2.BORDER_CONSTANT,value=[0,0,0])
# Update image shape after padding
rows,cols,chan = img.shape
# Rotate
M = cv2.getRotationMatrix2D((cols/2,rows/2),angle,1)
img = cv2.warpAffine(img,M,(cols,rows))
# Translation
M = np.float32([[1,0,trans_x],[0,1,trans_y]])
img = cv2.warpAffine(img,M,(cols,rows))
# Crop centered if image shape is greater than 32*32*3
if rows > 32:
offset = int((img.shape[0]-32)/2)
img = img[offset: 32 + offset, offset: 32 + offset]
return img
# Parameters
# Max example number per class
num_example_per_class = np.bincount(y_train)
min_example_num = max(num_example_per_class)
for i in range(len(num_example_per_class)):
# Update number of examples by class
num_example_per_class = np.bincount(y_train)
# If the class lacks examples...
if num_example_per_class[i] < min_example_num:
# Locate where pictures of this class are located in the training set..
pictures = np.array(np.where(y_train == i)).T
# Compute the number of pictures to be generated
num_example_to_generate = min_example_num - num_example_per_class[i]
# Compute the number of iteration necessary on the real data
num_iter = int( num_example_to_generate/len(pictures) ) + 1
# Compute the pool of real data necessary to fill the classes
if num_iter == 1 :
num_pictures = num_example_to_generate
else:
num_pictures = len(pictures)
# # Limit the number of iteration to 10
# num_iter = min(num_iter, 10)
# Create empty list
more_X = []
more_y = []
for k in range(num_iter):
# if we are in the last iteration, num_pictures is adjusted to fit the min_example_num
if (k == num_iter - 1) and (num_iter > 1):
num_pictures = min_example_num - num_iter * len(pictures)
# For each pictures of this class, generate 1 more synthetic image
pbar = tqdm(range(num_pictures), desc='Iter {:>2}/{}'.format(i+1, len(num_example_per_class)), unit='examples')
for j in pbar:
# Append the transformed picture
more_X.append(random_transform_image(X_train,pictures[j]))
# Append the class number
more_y.append(i)
# Append the synthetic images to the training set
X_train = np.append(X_train, np.array(more_X), axis=0)
y_train = np.append(y_train, np.array(more_y), axis=0)
print("New training feature shape",X_train.shape)
print("New training label shape",y_train.shape)
print("Data augmentation done!")
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
Neural network architecture (is the network over or underfitting?)
Play around preprocessing techniques (normalization, rgb to grayscale, etc)
Number of examples per label (some have more than others).
Generate fake data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
4. Augment the Data Set
End of explanation
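# Balance check (a small sketch): after the augmentation loop above, every class should hold
# roughly the same number of training examples, so the min and max counts should match.
import numpy as np
class_counts = np.bincount(y_train)
print('min per class: %d, max per class: %d' % (class_counts.min(), class_counts.max()))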
# Visualization
show_training_dataset_histogram(y_train,y_valid,y_test)
show_sample(X_train, y_train, histogram = 0, sample_num = 8, sample_index = 35000)
Explanation: 5. Show a sample of the augmented dataset
End of explanation
import cv2
from numpy import newaxis
def equalize_Y_histogram(features):
images = []
for image in features:
# Convert RGB to YUV
temp = cv2.cvtColor(image, cv2.COLOR_BGR2YUV);
# Equalize Y histogram in order to get better contrast accross the dataset
temp[:,:,0] = cv2.equalizeHist(temp[:,:,0])
# Convert back YUV to RGB
temp = cv2.cvtColor(temp, cv2.COLOR_YUV2BGR)
images.append(temp)
return np.array(images)
def CLAHE_contrast_normalization(features):
images = []
for image in features:
# create a CLAHE object
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(4,4))
temp = clahe.apply(image)
images.append(temp)
return np.array(images)
def convert_to_grayscale(features):
gray_images = []
for image in features:
# Convert RGB to grayscale
temp = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray_images.append(temp)
return np.array(gray_images)
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
a = 0.1
b = 0.9
image_data_norm = a + ((image_data - np.amin(image_data))*(b-a))/(np.amax(image_data) - np.amin(image_data))
return image_data_norm
Explanation: 6. Pre-process functions
End of explanation
index = 255
X_temp1 = convert_to_grayscale(X_train)
X_temp2 = CLAHE_contrast_normalization(X_temp1)
X_temp3 = normalize_grayscale(X_temp2)
show_sample(X_train, y_train, histogram = 1, sample_num = 1, sample_index = index)
show_sample(X_temp1, y_train, histogram = 1, sample_num = 1, sample_index = index, color_map ='gray')
show_sample(X_temp2, y_train, histogram = 1, sample_num = 1, sample_index = index, color_map ='gray')
print(X_temp2[index])
print(X_temp3[index])
Explanation: 7. Show a sample of the preprocess functions outputs
End of explanation
#Preprocessing pipeline
print('Preprocessing training features...')
X_train = convert_to_grayscale(X_train)
X_train = CLAHE_contrast_normalization(X_train)
X_train = normalize_grayscale(X_train)
X_train = X_train[..., newaxis]
print("Processed shape =", X_train.shape)
print('Preprocessing validation features...')
X_valid = convert_to_grayscale(X_valid)
X_valid = CLAHE_contrast_normalization(X_valid)
X_valid = normalize_grayscale(X_valid)
X_valid = X_valid[..., newaxis]
print("Processed shape =", X_valid.shape)
print('Preprocessing test features...')
X_test = convert_to_grayscale(X_test)
X_test = CLAHE_contrast_normalization(X_test)
X_test = normalize_grayscale(X_test)
X_test = X_test[..., newaxis]
print("Processed shape =", X_test.shape)
# Shuffle the training dataset
X_train, y_train = shuffle(X_train, y_train)
print("Pre-processing done!")
Explanation: 8. Preprocess the Dataset
End of explanation
import tensorflow as tf
from tensorflow.contrib.layers import flatten
def model(x, keep_prob):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Network Parameters
    n_classes = 43 # Total number of classes in the German Traffic Sign dataset
filter_size = 5
# Store layers weight & bias
weights = {
'wc1' : tf.Variable(tf.truncated_normal([filter_size, filter_size, 1, 100], mean = mu, stddev = sigma)),
'wc2' : tf.Variable(tf.truncated_normal([filter_size, filter_size, 100, 200], mean = mu, stddev = sigma)),
'wfc1': tf.Variable(tf.truncated_normal([9900, 100], mean = mu, stddev = sigma)),
'out' : tf.Variable(tf.truncated_normal([100, n_classes], mean = mu, stddev = sigma))}
biases = {
'bc1' : tf.Variable(tf.zeros([100])),
'bc2' : tf.Variable(tf.zeros([200])),
'bfc1': tf.Variable(tf.zeros([100])),
'out' : tf.Variable(tf.zeros([n_classes]))}
def conv2d(x, W, b, strides=1., padding='SAME'):
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding=padding)
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2, padding='SAME'):
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding=padding)
# Layer 1: Convolution 1 - 32*32*1 to 28*28*100
conv1 = conv2d(x, weights['wc1'], biases['bc1'], padding='VALID')
# Max Pool - 28*28*100 to 14*14*100
conv1 = maxpool2d(conv1, k=2)
# Layer 2: Convolution 2 - 14*14*100 to 10*10*200
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'], padding='VALID')
# Max Pool - 10*10*200 to 5*5*200
conv2 = maxpool2d(conv2, k=2)
#Fork second max pool - 14*14*100 to 7*7*100
conv1 = maxpool2d(conv1, k=2)
#Flatten conv1. Input = 7*7*100, Output = 4900
conv1 = tf.contrib.layers.flatten(conv1)
# Flatten conv2. Input = 5x5x200. Output = 5000.
conv2 = tf.contrib.layers.flatten(conv2)
# Concatenate
flat = tf.concat(1,[conv1,conv2])
# Layer 3 : Fully Connected. Input = 9900. Output = 100.
fc1 = tf.add(tf.matmul(flat, weights['wfc1']), biases['bfc1'])
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, keep_prob)
# Layer 4: Fully Connected. Input = 100. Output = 43.
logits = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return logits
Explanation: 9. Model Architecture
| Layer | Description | Input | Output |
|:-------------:|:---------------------------------------------:|:-----------------:|:---------------------------:|
| Input | 32x32x1 Grayscale image | Image | Convolution 1 |
| Convolution 1 | 1x1 stride, valid padding, outputs 28x28x100 | Input | RELU |
| RELU 1 | | Convolution 1 | Max Pooling 1 |
| Max pooling 1 | 2x2 stride, outputs 14x14x100 | RELU 1 | Convolution 2, Max Pooling 3|
| Convolution 2 | 1x1 stride, valid padding, outputs 10x10x200 | Max pooling 1 | RELU 2 |
| RELU 2 | | Convolution 2 | Max pooling 2 |
| Max pooling 2 | 2x2 stride, outputs 5x5x200 | RELU 2 | Flatten 2 |
| Max pooling 3 | 2x2 stride, outputs 7x7x100 | Max pooling 1 | Flatten 1 |
| Flatten 1 | Input = 7x7x100, Output = 4900 | Max pooling 3 | Concatenate 1 |
| Flatten 2 | Input = 5x5x200, Output = 5000 | Max pooling 2 | Concatenate 1 |
| Concatenate 1 | Input1 = 4900, Input2 = 5000, Output = 9900 | Max pooling 2 and 3 |Fully connected |
| Fully connected | Fully Connected. Input = 9900, Output = 100 | Concatenate 1 | Dropout |
| Dropout | Keep prob = 0.75 | Fully connected | Softmax |
| Softmax | Fully Connected. Input = 100, Output = 43 | Dropout | Probabilities |
End of explanation
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
#Hyperparameters
EPOCHS = 100 #Max EPOCH number, if ever early stopping doesn't kick in
BATCH_SIZE = 256 #Max batch size
rate = 0.001 #Base learning rate
keep_probability = 0.75 #Keep probability for dropout..
max_iter_wo_improvmnt = 3000 #For early stopping
Explanation: 10. Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
End of explanation
#Declare placeholder tensors
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
keep_prob = tf.placeholder(tf.float32)
one_hot_y = tf.one_hot(y, 43)
Explanation: 11. Features and Labels
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
End of explanation
logits = model(x, keep_prob)
probabilities = tf.nn.softmax(logits)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
Explanation: 12. Training Pipeline
Create a training pipeline that uses the model to classify German Traffic Sign Benchmarks data.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: 13. Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
End of explanation
from sklearn.utils import shuffle
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
    # Max iteration number without improvement (note: the early-stopping check below uses max_iter_wo_improvmnt from the hyperparameter cell)
    max_iteration_num_wo_improv = 1000
print("Training...")
iteration = 0
best_valid_accuracy = 0
best_accuracy_iter = 0
stop = 0
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
iteration = iteration + 1
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: keep_probability})
# After 10 Epochs, for every 200 iterations validation accuracy is checked
if (iteration % 200 == 0 and i > 10):
validation_accuracy = evaluate(X_valid, y_valid)
if validation_accuracy > best_valid_accuracy:
best_valid_accuracy = validation_accuracy
best_accuracy_iter = iteration
saver = tf.train.Saver()
saver.save(sess, './best_model')
print("Improvement found, model saved!")
stop = 0
                # Stopping criterion: stop training if there has been no improvement for max_iter_wo_improvmnt iterations
if (iteration - best_accuracy_iter) > max_iter_wo_improvmnt:
print("Stopping criteria met..")
stop = 1
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
if stop == 1:
break
# saver.save(sess, './lenet')
# print("Model saved")
Explanation: 14. Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
End of explanation
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
print("Evaluating..")
train_accuracy = evaluate(X_train, y_train)
print("Train Accuracy = {:.3f}".format(train_accuracy))
valid_accuracy = evaluate(X_valid, y_valid)
print("Valid Accuracy = {:.3f}".format(valid_accuracy))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: 15. Evaluate accuracy of the different data sets
End of explanation
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
test_images = os.listdir('traffic-signs-data/web_found_signs/')
X_web = []
for file in test_images:
image = mpimg.imread('traffic-signs-data/web_found_signs/' + file)
plt.imshow(image)
plt.show()
print("Loaded ", file)
X_web.append(image)
X_web = np.array(X_web)
# Preprocess images
print('Preprocessing features...')
X_web = equalize_Y_histogram(X_web)
X_web = convert_to_grayscale(X_web)
X_web = normalize_grayscale(X_web)
X_web = X_web[..., newaxis]
print("Processed shape =", X_web.shape)
Explanation: Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
16. Load and Show the Images
End of explanation
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
import tensorflow as tf
# hardcoded..
y_web = [9,22,2,18,1,17,4,10,38,4,4,23]
#We have to set the keep probability to 1.0 in the model..
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
logits_web = sess.run(tf.argmax(logits,1), feed_dict={x: X_web, keep_prob: 1.0})
print("Prediction =", logits_web)
# show_sample(X_web, logits_web, histogram = 0, sample_num = len(test_images), sample_index = 0, color_map = 'gray')
#Number of column to show
sample_num = len(test_images)
col_num = 4
if sample_num%col_num == 0:
row_num = int(sample_num/col_num)
else:
row_num = int(sample_num/col_num)+1
#Create training sample plot
f, axarr = plt.subplots(row_num, col_num, figsize=(col_num*4,(row_num+1)*2))
signnames = pd.read_csv('signnames.csv')
for i in range(0, sample_num, 1):
image = X_web[i].squeeze()
if logits_web[i] != y_web[i]:
color_str = 'red'
else:
color_str = 'green'
title_str = 'Predicted : %s \n Real: %s' % (signnames.iloc[logits_web[i], 1],signnames.iloc[y_web[i], 1])
axarr[int(i/col_num),i%col_num].set_title(title_str, color = color_str)
axarr[int(i/col_num),i%col_num].imshow(image,'gray')
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
f.tight_layout()
plt.show()
Explanation: 17. Predict the Sign Type for Each Image
End of explanation
### Calculate the accuracy for these 5 new images.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_web, y_web)
print("Web images Accuracy = {:.3f}".format(test_accuracy))
Explanation: 18. Analyze Performance
End of explanation
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
import matplotlib.gridspec as gridspec
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
softmax_prob = sess.run(tf.nn.top_k(probabilities,k = 5), feed_dict={x: X_web, keep_prob: 1.0})
signnames = pd.read_csv('signnames.csv')
for i in range(len(test_images)):
plt.figure(figsize = (6,2))
gs = gridspec.GridSpec(1, 2,width_ratios=[2,3])
plt.subplot(gs[0])
plt.imshow(X_web[i].squeeze(),cmap="gray")
plt.axis('off')
plt.subplot(gs[1])
plt.barh(6-np.arange(5),softmax_prob[0][i], align='center')
if logits_web[i] != y_web[i]:
color_str = 'red'
else:
color_str = 'green'
for i_label in range(5):
temp_string = "%.1f %% : %s" % (softmax_prob[0][i][i_label]*100, str(signnames.iloc[softmax_prob[1][i][i_label], 1]))
plt.text(softmax_prob[0][i][0]*1.1,6-i_label-.15, temp_string, color = color_str)
plt.show()
Explanation: 19. Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:
```
(5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
End of explanation
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it maybe having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:  # 'and' rather than bitwise '&', which binds tighter than '!='
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
Explanation: Step 4: Visualize the Neural Network's State with Test Images
This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
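As a concrete illustration (not part of the original notebook), a call could look like the sketch below. model() as written keeps its convolutional activations in local variables, so this assumes it has been refactored to also return one of them (named conv1_act here), and that the 6x8 subplot grid inside outputFeatureMap is enlarged if the chosen layer has more than 48 feature maps.
```
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    # One already pre-processed image with shape (1, 32, 32, 1)
    stimuli = X_test[0:1]
    # conv1_act is assumed to be a tensor returned by the refactored model()
    outputFeatureMap(stimuli, conv1_act, plt_num=1)
```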
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
End of explanation |
8,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probability tables and the categorical distribution
The following cell illustrates drawing from a categorical distribution with on an alphabet, not necessarly $0\dots K-1$.
Step1: Often we need the opposite of the above process, that is given a list of elements, we need to count the number of occurences of each symbol. The following method creates such a statistics.
Step2: Continuous Multivariate (todo)
Step3: The Multivariate Gaussian distribution
\begin{align}
\mathcal{N}(x; \mu, \Sigma) &= |2\pi \Sigma|^{-1/2} \exp\left( -\frac{1}{2} (x-\mu)^\top \Sigma^{-1} (x-\mu) \right) \
& = \exp\left(-\frac{1}{2} x^\top \Sigma^{-1} x + \mu^\top \Sigma^{-1} x - \frac{1}{2} \mu^\top \Sigma^{-1} \mu -\frac{1}{2}\log \det(2\pi \Sigma) \right) \
\end{align}
Draw a vector $x \in \mathbf{R}^N$ where each element $x_i \sim \mathcal{N}(x; 0, 1)$ for $i = 1\dots N$.
$\newcommand{\E}[1]{\left\langle#1\right\rangle}$
Construct
\begin{align}
y = Ax
\end{align}
The expectation and the variance are obtained by
\begin{align}
\E{y} = \E{Ax} = 0
\end{align}
\begin{align}
\E{y y^\top} = A \E{x x^\top} A^\top = A A^\top
\end{align}
So
\begin{align}
y \sim \mathcal{N}(y; 0, A A^\top)
\end{align}
In two dimensions, a bi-variate Gaussian is conveniently represented by an ellipse. The ellipse shows a contour of equal probability. In particular, if we plot the $3\sigma$ ellipse, $99 \%$ of all the data points should be inside the ellipse.
Step4: When the covariance matrix $\Sigma$ is given, as is typically the case, we need a factorization of
$\Sigma = W W^\top$. The Cholesky factorization is such a factorization. (Another possibility, whilst computationally more costly, is the matrix square root.)
Step5: The numpy function numpy.random.multivariate_normal generates samples from a multivariate Gaussian with the given mean and covariance.
Step6: Evaluation of the multivariate Gaussian density
The log-density of the multivariate Gaussian has the following exponential form
\begin{align}
\log \mathcal{N}(x; \mu, \Sigma) &=
-\frac{1}{2}\log \det(2\pi \Sigma) -\frac{1}{2} (x-\mu)^\top \Sigma^{-1} (x - \mu)
\end{align}
It is tempting to implement these expression as written -- indeed it is useful to do so for debugging purposes. However, this direct method is both inefficient and numerically not very stable. This will be a problem when the dimension of $x$ is high. A direct implementation might be as follows
Step7: The evaluation seemingly requires the following steps
Step8: For the solution of $\Sigma u = b$ where $\Sigma = WW^\top$, we use the implementation in scipy.linalg.cho_solve.
Step9: Dirichlet Distribution
The Dirichlet distribution is a distribution over probability vectors.
$$
\mathcal{D}(w_{1
Step10: Wishart
https
Step11: Inverse Wishart
https
Step12: Evaluating the Wishart density
Step13: Some useful functions from np.random
We can get a specific random number state and generate data from it. | Python Code:
import numpy as np
# Sampling from a Categorical Distribution
a = np.array(sorted(['blue', 'red', 'black', 'yellow']))
pr = np.array([0.2, 0.55, 0.15, 0.1])
N = 100
x = np.random.choice(a, size=N, replace=True, p=pr)
print('Symbols:')
print(a)
print('Probabilities:')
print(pr)
print('{N} realizations:'.format(N=N))
print(x)
Explanation: Probability tables and the categorical distribution
The following cell illustrates drawing from a categorical distribution over an alphabet that is not necessarily $0\dots K-1$.
End of explanation
import collections
c = collections.Counter(x)
print(c.most_common())
counts = [e[1] for e in c.most_common()]
symbols = [e[0] for e in c.most_common()]
print('Sorted according to counts')
print(counts)
print(symbols)
# If we require the symbols in sorted order with respect to symbol names, use:
counts = [e[1] for e in sorted(c.most_common())]
symbols = [e[0] for e in sorted(c.most_common())]
print('Sorted according to symbols')
print(counts)
print(symbols)
Explanation: Often we need the opposite of the above process, that is, given a list of elements, we need to count the number of occurrences of each symbol. The following method computes such statistics.
End of explanation
from IPython.display import display, Math, Latex, HTML
import notes_utilities as nut
#import imp
#imp.reload(nut)
print('MV Gaussian')
L = nut.pdf2latex_mvnormal(x=r'z', m=r'\mu',v=r'\Sigma')
#L = nut.pdf2latex_mvnormal(x=r's', m=0,v=r'I')
display(HTML(nut.eqs2html_table(L)))
Explanation: Continuous Multivariate (todo)
End of explanation
%matplotlib inline
def ellipse_line(A, mu, col='b'):
'''
Creates an ellipse from short line segments y = A x + \mu
where x is on the unit circle.
'''
N = 18
th = np.arange(0, 2*np.pi+np.pi/N, np.pi/N)
X = np.array([np.cos(th),np.sin(th)])
Y = np.dot(A, X)
ln = plt.Line2D(mu[0]+Y[0,:],mu[1]+Y[1,:],markeredgecolor='k', linewidth=1, color=col)
return ln
N = 100
A = np.random.randn(2,2)
mu = np.zeros(2)
X = np.random.randn(2,N)
Y = np.dot(A,X)
plt.cla()
plt.axis('equal')
ax = plt.gca()
ax.set_xlim(-8,8)
ax.set_ylim(-8,8)
col = 'b'
ln = ellipse_line(3*A, mu, col)
ax.add_line(ln)
#plt.hold(True)
plt.plot(mu[0]+Y[0,:],mu[1]+Y[1,:],'.'+col)
plt.show()
np.dot(A,A.T)
Explanation: The Multivariate Gaussian distribution
\begin{align}
\mathcal{N}(x; \mu, \Sigma) &= |2\pi \Sigma|^{-1/2} \exp\left( -\frac{1}{2} (x-\mu)^\top \Sigma^{-1} (x-\mu) \right) \\
& = \exp\left(-\frac{1}{2} x^\top \Sigma^{-1} x + \mu^\top \Sigma^{-1} x - \frac{1}{2} \mu^\top \Sigma^{-1} \mu -\frac{1}{2}\log \det(2\pi \Sigma) \right)
\end{align}
Draw a vector $x \in \mathbf{R}^N$ where each element $x_i \sim \mathcal{N}(x; 0, 1)$ for $i = 1\dots N$.
$\newcommand{\E}[1]{\left\langle#1\right\rangle}$
Construct
\begin{align}
y = Ax
\end{align}
The expectation and the variance are obtained by
\begin{align}
\E{y} = \E{Ax} = 0
\end{align}
\begin{align}
\E{y y^\top} = A \E{x x^\top} A^\top = A A^\top
\end{align}
So
\begin{align}
y \sim \mathcal{N}(y; 0, A A^\top)
\end{align}
In two dimensions, a bi-variate Gaussian is conveniently represented by an ellipse. The ellipse shows a contour of equal probability. In particular, if we plot the $3\sigma$ ellipse, $99 \%$ of all the data points should be inside the ellipse.
End of explanation
Sigma = np.dot(A, A.T)
W = np.linalg.cholesky(Sigma)
X = np.random.randn(2,N)
Y = np.dot(W,X)
plt.cla()
plt.axis('equal')
ax = plt.gca()
ax.set_xlim(-8,8)
ax.set_ylim(-8,8)
col = 'b'
ln = ellipse_line(3*W, mu, col)
ax.add_line(ln)
#plt.hold(True)
plt.plot(mu[0]+Y[0,:],mu[1]+Y[1,:],'.'+col)
plt.show()
Explanation: When the covariance matrix $\Sigma$ is given, as is typically the case, we need a factorization of
$\Sigma = W W^\top$. The Cholesky factorization is such a factorization. (Another possibility, whilst computationally more costly, is the matrix square root.)
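As a small added check (not in the original notebook), both factorizations can be verified numerically; scipy.linalg.sqrtm is the matrix square root mentioned above.
```
import numpy as np
import scipy.linalg as la
A = np.random.randn(2, 2)
Sigma = np.dot(A, A.T)              # a positive definite covariance
W = np.linalg.cholesky(Sigma)       # lower triangular factor
M = la.sqrtm(Sigma)                 # symmetric matrix square root
print(np.allclose(np.dot(W, W.T), Sigma))   # True
print(np.allclose(np.dot(M, M), Sigma))     # True (M is symmetric, so M M = M M^T)
```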
End of explanation
N = 100
Sig = np.dot(A, A.T)
x = np.random.multivariate_normal(mu, Sig, size=N)
plt.cla()
plt.axis('equal')
ax = plt.gca()
ax.set_xlim(-8,8)
ax.set_ylim(-8,8)
plt.plot(x[:,0], x[:,1], 'b.')
ln = ellipse_line(3*A,mu,'b')
plt.gca().add_line(ln)
plt.show()
Explanation: The numpy function numpy.random.multivariate_normal generates samples from a multivariate Gaussian with the given mean and covariance.
End of explanation
def log_mvar_gaussian_inefficient(x, mu, Sig):
return -0.5*np.log(np.linalg.det(2*np.pi*Sig)) - 0.5*np.sum((x-mu)*np.dot(np.linalg.inv(Sig), x-mu),axis=0)
Explanation: Evaluation of the multivariate Gaussian density
The log-density of the multivariate Gaussian has the following exponential form
\begin{align}
\log \mathcal{N}(x; \mu, \Sigma) &=
-\frac{1}{2}\log \det(2\pi \Sigma) -\frac{1}{2} (x-\mu)^\top \Sigma^{-1} (x - \mu)
\end{align}
It is tempting to implement these expressions as written -- indeed it is useful to do so for debugging purposes. However, this direct method is both inefficient and numerically not very stable. This will be a problem when the dimension of $x$ is high. A direct implementation might be as follows:
End of explanation
import scipy as sc
import scipy.linalg as la
def log_mvar_gaussian_pdf(x, mu, Sig):
W = np.linalg.cholesky(Sig)
z = -np.sum(np.log(2*np.pi)/2 + np.log(np.diag(W))) - 0.5* np.sum((x-mu)*la.cho_solve((W,True), x-mu),axis=0)
return z
# Dimension of the problem
N = 2
# Generate K points to evaluate the density at
K = 10
x = np.random.randn(N,K)
# Generate random parameters
mu = np.random.randn(N,1)
R = np.random.randn(N,N)
Sig = np.dot(R, R.T)
z1 = log_mvar_gaussian_pdf(x, mu, Sig)
z2 = log_mvar_gaussian_inefficient(x, mu, Sig)
print(z1)
print(z2)
Explanation: The evaluation seemingly requires the following steps:
Evaluation of the log of the determinant of the covariance matrix $\Sigma$
Inversion of the covariance matrix $\Sigma$
Evaluation of the quadratic form $(x-\mu)^\top \Sigma^{-1} (x - \mu)$
A more efficient implementation uses the following observations:
- The covariance matrix $\Sigma$ is positive semidefinite and has a Cholesky factorization
\begin{align}
\Sigma = W W^\top
\end{align}
where $W$ is a lower triangular matrix
- The determinant satisfies the following identity
\begin{align}
\det(\Sigma) & = \det(W W^\top) = \det(W) \det(W^\top) = \det(W)^2
\end{align}
The determinant of the triangular matrix $W$ is simply the product of its diagonal elements $W_{i,i}$ so
\begin{align}
\log \det(\Sigma) & = 2 \log \det(W) & = 2 \sum_i \log W_{i,i}
\end{align}
The quadratic form can be evaluated by the inner product $(x - \mu)^\top u$ where $u = \Sigma^{-1} (x - \mu)$.
Finding $u$ is equivalent to the solution of the linear system
$$
\Sigma u = (x - \mu)
$$
and the solution is equivalent to
$$
u = (W^\top)^{-1}W^{-1} (x - \mu)
$$
and can be solved efficiently by backsubstitution as $W$ is triangular.
This can be implemented as follows
End of explanation
import scipy.linalg as la
N = 2
# Construct a random positive definite matrix
R = np.random.randn(N,N)
Sig = np.dot(R, R.T)
b = np.random.randn(N,1)
# Direct implementation -- inefficient
u_direct = np.matrix(np.linalg.inv(Sig))*b
# Efficient implementation
W = np.linalg.cholesky(Sig)
u_efficient = la.cho_solve((W,True), b)
# Verify that both give the same result
print(u_direct)
print(u_efficient)
from notes_utilities import matrix_inv_lemma, matrix_inv_blocks
display(Latex(matrix_inv_lemma('A', 'B', 'S', 'D', alpha='-', paren=False, inverse=False)))
display(Latex(matrix_inv_lemma('A', 'B', 'S', 'D', alpha='+', paren=False, inverse=False)))
display(Latex(matrix_inv_lemma('A', 'B', 'S', 'D', alpha='-', paren=True, inverse=False)))
display(Latex(matrix_inv_lemma('A', 'B', 'S', 'D', alpha='-', paren=False, inverse=True)))
display(Latex(matrix_inv_lemma(r'\Sigma_{11}', r'\Sigma_{12}', r'{\Sigma_{22}^{-1}}', r'\Sigma_{12}^\top', alpha='-', paren=False)))
temp = matrix_inv_blocks(r'\Sigma_{11}', r'\Sigma_{12}', r'\Sigma_{22}', r'\Sigma_{12}^\top', paren1=False, paren2=False)
display(Latex(temp))
print(temp)
Explanation: For the solution of $\Sigma u = b$ where $\Sigma = WW^\top$, we use the implementation in scipy.linalg.cho_solve.
End of explanation
from IPython.display import display, Math, Latex, HTML
import notes_utilities as nut
from importlib import reload
reload(nut)
Latex(r'$\DeclareMathOperator{\trace}{Tr}$')  # raw string so that '\t' is not interpreted as a tab
L = nut.pdf2latex_dirichlet(x=r'h', N=r'J', i='j',a=r'\alpha')
display(HTML(nut.eqs2html_table(L)))
import numpy as np
np.random.dirichlet(3*np.array([1,2,3]))
Explanation: Dirichlet Distribution
The Dirichlet distribution is a distribution over probability vectors.
$$
\mathcal{D}(w_{1:N}; \alpha_{1:N}) = \frac{\Gamma(\sum_i \alpha_i)}{\prod_i \Gamma(\alpha_i) } \prod w_i^{\alpha_i-1}
$$
End of explanation
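As an added illustration (not in the original notebook), scipy.stats.dirichlet evaluates the same density, which can be checked against the normalisation constant written above.
```
import numpy as np
from scipy.stats import dirichlet
from scipy.special import gammaln
alpha = 3*np.array([1., 2., 3.])
w = np.array([0.2, 0.3, 0.5])       # a point on the probability simplex
print(dirichlet.logpdf(w, alpha))
log_norm = gammaln(alpha.sum()) - gammaln(alpha).sum()
print(log_norm + np.sum((alpha - 1)*np.log(w)))   # same value
```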
from IPython.display import display, Math, Latex, HTML
import notes_utilities as nut
#import imp
#imp.reload(nut)
print('MV Gaussian')
L = nut.pdf2latex_mvnormal(x=r's', m=r'\mu',v=r'\Sigma')
#L = nut.pdf2latex_mvnormal(x=r's', m=0,v=r'I')
display(HTML(nut.eqs2html_table(L)))
print('Wishart')
L = nut.pdf2latex_wishart(X=r'X', nu=r'\nu', S=r'S', k=r'k', W='W')
display(HTML(nut.eqs2html_table(L)))
print('Inverse Wishart')
L = nut.pdf2latex_invwishart(X=r'\Sigma', nu=r'\nu', S=r'S', k=r'k', IW='IW')
#L = nut.pdf2latex_mvnormal(x=r's', m=0,v=r'I')
display(HTML(nut.eqs2html_table(L)))
Explanation: Wishart
https://en.wikipedia.org/wiki/Wishart_distribution
$\newcommand{\trace}{\mathop{\text{Tr}}}$
The Wishart distribution appears as the posterior of the precision matrix of a multivariate Gaussian.
A positive semidefinite matrix $X \in \mathbb{R}^{k \times k}$ is said to have a Wishart density
if
\begin{eqnarray}
{\cal W}(X; \nu, S) & = & \exp\left( \frac{\nu - k - 1}2 \log |X| - \frac{1}2\trace S^{-1}X - \frac{\nu}{2} \log |S| - \log Z \right) \
Z & = & 2^{\nu k /2} \pi^{k(k-1)/4} \prod_{i=1}^k \Gamma(\frac{\nu + 1 - i}2) \
& = & 2^{\nu k /2} \Gamma_k(\nu/2)
\end{eqnarray}
The link to Gamma ${\cal G}$ distribution is established by
\begin{eqnarray}
{\cal W}(X; \nu, S) & = & \exp\left( \frac{\nu - k - 1}2 \log |X| - \trace (2 S)^{-1}X - \frac{\nu}{2} \log |2 S | - \log \Gamma_k(\nu/2) \right)
\end{eqnarray}
Alternative parametrization with $a = \nu/2$ and $B^{-1} = 2 S$.
\begin{eqnarray}
{\cal W}(X; 2a, B^{-1}/2) & = & \exp( {(a - (k + 1)/2)} \log |X| &- \trace B X &- \log \Gamma_k(a) &+ a \log |B| ) \
\mathcal{G}(x; a, b) & = & \exp(({a} - 1)\log x &- {b}{x} &- \log \Gamma({a}) &+ {a} \log {b})
\end{eqnarray}
$\newcommand{\E}[1]{\langle#1\rangle}$
\begin{eqnarray}
\E{X} & = & \nu S = a B^{-1}
\end{eqnarray}
End of explanation
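A quick Monte Carlo check of the expectation above (added as an illustration, not part of the original notebook):
```
import numpy as np
from scipy.stats import wishart
k = 2
nu = 8
S = np.array([[4, -1.9], [-1.9, 1.3]])/nu
samples = wishart.rvs(df=nu, scale=S, size=20000)   # shape (20000, k, k)
print(samples.mean(axis=0))   # close to nu*S
print(nu*S)
```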
# Generate Wishart random variables
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import scipy as sc
import pandas as pd
from scipy.stats import wishart, invwishart
from notes_utilities import pnorm_ball_line
k = 2
#S = np.eye(k)
#S = np.diag([2,3])
nu = 8
S = np.array([[4,-1.9],[-1.9,1.3]])/nu
plt.figure(figsize=(5,5))
ax = plt.gca()
N = 10
for i in range(N):
W = wishart.rvs(nu, S, random_state=None)
ln = pnorm_ball_line(np.linalg.cholesky(W),color='b',linewidth=1)
ax.add_line(ln)
ln = pnorm_ball_line(np.linalg.cholesky(nu*S),color='r')
ax.add_line(ln)
Lim = 5
ax.set_xlim([-Lim,Lim])
ax.set_ylim([-Lim,Lim])
plt.show()
Explanation: Inverse Wishart
https://en.wikipedia.org/wiki/Inverse-Wishart_distribution
The inverse Wishart is a matrix variate distribution and appears as the posterior distribution of a covariance matrix of a Gaussian density. More precisely, it appears as the distribution of $U U^T$ where each column of the $k \times \nu$ matrix $U$ is distributed independently according to $u_i \sim \mathcal{N}(U; 0, S)$ for $i=1\dots \nu$.
The inverse of a Wishart variate matrix is said to be inverse Wishart distributed
\begin{eqnarray}
{\cal IW}(X; \nu, S) & = & \exp\left( - \frac{\nu + k + 1}2 \log |X| - \frac{1}2\trace S X^{-1} + \frac{\nu}{2}\log |S| - \log Z \right) \
Z & = & 2^{\nu k /2} \pi^{k(k-1)/4} \prod_{i=1}^k \Gamma(\frac{\nu + 1 - i}2) \
& = & 2^{\nu k /2} \Gamma_k(\nu/2)
\end{eqnarray}
The link to Inverse Gamma ${\cal IG}$ distribution is established by
\begin{eqnarray}
{\cal IW}(X; \nu, S) & = & \exp\left( - \frac{\nu + k + 1}2 \log |X| - \trace (S/2) X^{-1} + \frac{\nu}{2}\log |S/2| - \log\Gamma_k(\nu/2) \right)
\end{eqnarray}
Alternative with $a = \nu/2$ and $B = S/2$.
\begin{eqnarray}
{\cal IW}(X; 2a, 2B) & = & \exp( - (a + (k+1)/2) \log |X| &- \trace BX^{-1} &- \log\Gamma_k(a) &+ a\log |B|) \
{\cal IG}(x; a, b) & = & \exp(-({a} + 1)\log x &- \frac{{b}}{{x}} &- \log\Gamma({a}) &+ {a} \log {b})
\end{eqnarray}
\begin{eqnarray}
\E{X} & = & S/(\nu - k - 1) = B/(a - (k+1)/2)
\end{eqnarray}
Aside: The multigamma function
https://en.wikipedia.org/wiki/Multivariate_gamma_function
The multigamma function is the result of the integral
$$
\Gamma_k(\alpha) = \int_{W \succ 0} e^{-tr(W)} |W|^{\alpha - (k+1)/2} dW
$$
where the $W$ is a positive definite $k\times k$ matrix (denoted as $W \succ 0$). The scalar parameter $\alpha$ is positive and
$\alpha > (k+1)/2$. It turns out that this integral can be evaluated as
$$
\Gamma_k(\alpha) = \pi^{k(k-1)/4} \prod_{i=1}^{k} \Gamma(\alpha - (i-1)/2)
$$
End of explanation
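The closed form above can be checked directly against scipy's implementation (an added illustration, not part of the original notebook):
```
import numpy as np
from scipy.special import multigammaln, gammaln
k = 2
alpha = 4.0
manual = (k*(k - 1)/4.0)*np.log(np.pi) + sum(gammaln(alpha - (i - 1)/2.0) for i in range(1, k + 1))
print(manual, multigammaln(alpha, k))   # the two values agree
```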
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import scipy as sc
import pandas as pd
from scipy.stats import wishart, invwishart
from scipy.special import multigammaln
from notes_utilities import pnorm_ball_line
k = 2
nu = 8
S = np.array([[4,-1.9],[-1.9,1.3]])/nu
X = np.random.randn(k,k)
X = X.T.dot(X)
cX = np.linalg.cholesky(X)
logdetX = 2*np.sum(np.log(np.diag(cX)))
c2S = np.linalg.cholesky(2*S)
logdet2S = 2*np.sum(np.log(np.diag(c2S)))
cS2 = c2S/2.0 # np.linalg.cholesky(S/2)
logdetS2 = logdet2S - 4*np.log(2) # 2*np.sum(np.log(np.diag(cS2)))
logpdf_wishart = (nu - k - 1)/2.*logdetX - np.trace(np.linalg.solve(2*S, X)) - nu/2*logdet2S - multigammaln(nu/2, k)
logpdf_invwishart = -(nu + k + 1)/2.*logdetX - np.trace(np.linalg.solve(X, S/2)) + nu/2*logdetS2 - multigammaln(nu/2, k)
print(logpdf_wishart)
print(wishart.logpdf(X, nu, S))
print(logpdf_invwishart)
print(invwishart.logpdf(X, nu, S))
#\exp\left( \frac{\nu - k - 1}2 \log |X| - \trace (2 S)^{-1}X - \frac{\nu}{2} \log |2 S | - \log \Gamma_k(\nu/2) \right)
# Generate Inverse Wishart random variables
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import scipy as sc
import pandas as pd
from scipy.stats import wishart, invwishart
from notes_utilities import pnorm_ball_line
k = 2
#S = np.eye(k)
#S = np.diag([2,3])
nu = 16
S = np.array([[4,-1.9],[-1.9,1.3]])*nu
plt.figure(figsize=(5,5))
ax = plt.gca()
N = 10
for i in range(N):
IW = invwishart.rvs(nu, S, random_state=None)
ln = pnorm_ball_line(np.linalg.cholesky(IW),color='b',linewidth=1)
ax.add_line(ln)
ln = pnorm_ball_line(np.linalg.cholesky(S/(nu-k-1)),color='r')
ax.add_line(ln)
Lim = 5
ax.set_xlim([-Lim,Lim])
ax.set_ylim([-Lim,Lim])
plt.show()
import inspect
import scipy as sc
from scipy.stats import wishart
#print(inspect.getsource(sc.stats._multivariate.wishart_gen))
print(inspect.getsource(sc.special.multigammaln))
#sps.special.multigammaln
%connect_info
Explanation: Evaluating the Wishart density
End of explanation
import numpy as np
u = np.random.RandomState()
print(u.permutation(10))
lam = 3;
print(u.exponential(lam))
Explanation: Some useful functions from np.random
We can get a specific random number state and generate data from it.
End of explanation |
8,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DataFrame
Step1: This is to get a tree from test data called cernstaff.root
Step2: Here we create the DataFrame object
Step3: As you can see, it also creates a PyTreeReader. This is why PyTreeReader is mandatory for the class
Traditional read without cache
Step4: Same but now first caching it and then rerunning the same
Step5: Now rerunning it and using the cached results to print
Step6: There is some caching with the files in the Swan service, but the point is that first and second run differ alot with their speed
Lets reset the cache by calling a function from the class
Step7: Now we can demonstrate different histograms and drawing them
Step8: Rerun the same analysis, compare the time
Step9: Lets add one more filter after the cache and see how it differs... | Python Code:
import ROOT
from PyTreeReader import PyTreeReader
from functional import DataFrame
from ROOT import TFile
Explanation: DataFrame: Functional Chains for TTrees in Python.
<hr style="border-top-width: 4px; border-top-color: #359C38;">
The DataFrame class brings functional chains with caching to trees. This is achieved by identifying the different functions in a chain and turning them into lists of transformations.
Usability is key. Functional chains are a much simpler way of creating histograms because the user doesn't need to write loops; DataFrame does it for you.
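To illustrate the idea (a conceptual sketch only, not the actual implementation of the DataFrame class), a functional chain can be seen as an object that records the requested transformations in a list and only loops over the entries when a result is asked for, optionally caching the filtered entries:
```
class ChainSketch(object):
    """Minimal illustration of a lazy filter chain with caching."""
    def __init__(self, entries, filters=None):
        self._entries = entries          # any iterable of 'events'
        self._filters = filters or []    # recorded transformations
        self._cached = None
    def filter(self, predicate):
        return ChainSketch(self._entries, self._filters + [predicate])
    def cache(self):
        if self._cached is None:
            self._cached = [e for e in self._entries if all(f(e) for f in self._filters)]
        return ChainSketch(self._cached)  # later calls start from the cached subset
    def head(self, n):
        out = []
        for e in self._entries:
            if all(f(e) for f in self._filters):
                out.append(e)
                if len(out) == n:
                    break
        return out

# The same chaining pattern on plain numbers instead of TTree entries
print(ChainSketch(range(100)).filter(lambda e: e % 7 == 0).head(5))
```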
Preparation
We import ROOT, the DataFrame class and the PyTreeReader class. DataFrame uses PyTreeReader for filling histograms and filtering results, so most of the computing is done by PyTreeReader inside the DataFrame class. This will probably be reworked once the role of PyTreeReader within ROOT is settled. PyTreeReader can be found at https://github.com/dpiparo/pytreereader
End of explanation
testFile = TFile('cernstaff.root')
testTree = testFile.Get('T')
Explanation: This is to get a tree from test data called cernstaff.root
End of explanation
dataFrame = DataFrame(testTree)
Explanation: Here we create the DataFrame object
End of explanation
%%time
dataFrame.filter(lambda e : e.Children() > 4).head(5)
Explanation: As you can see, it also creates a PyTreeReader. This is why PyTreeReader is mandatory for the class
Traditional read without cache
End of explanation
dataFrame.resetcache()
%%time
dataFrame.filter(lambda e : e.Children() > 4).cache().head(5)
Explanation: Same but now first caching it and then rerunning the same
End of explanation
%%time
dataFrame.filter(lambda e : e.Children() > 4).cache().filter(lambda e : e.Age() < 47).head(5)
Explanation: Now rerunning it and using the cached results to print
End of explanation
dataFrame.resetcache()
Explanation: There is some file caching in the SWAN service, but the point is that the first and second runs differ a lot in speed
Let's reset the cache by calling a function of the class
End of explanation
%%time
dataFrame.filter(lambda e : e.Age() > 45).cache().histo('Age:Cost').Draw('COLZ')
ROOT.gPad.Draw()
Explanation: Now we can demonstrate different histograms and drawing them
End of explanation
%%time
dataFrame.filter(lambda e : e.Age() > 45).cache().histo('Age:Cost').Draw('COLZ')
ROOT.gPad.Draw()
Explanation: Rerun the same analysis, compare the time
End of explanation
%%time
dataFrame.filter(lambda e : e.Age() > 45).cache().filter(lambda e: e.Cost() > 8500).histo('Age:Cost').Draw('COLZ')
ROOT.gPad.Draw()
Explanation: Let's add one more filter after the cache and see how the timing differs...
End of explanation |
8,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Meta Analysis of the Datasets for the Epi² pilot project
RNA PTM DATASETS
PYTHON 3 Notebook
Adrien Leger / EMBL EBI
Starting date 23/05/2016
Import general package and definition of specific functions
Step1: Overview of the datasets
I collected datasets from different sources: the 2 big databases of inosine editing sites, RADAR and DARNED, as well as all the datasets cited in the recent review about lncRNA and epitranscriptomics from Shafik et al. In addition I also have 2 recent datasets for m1A and m6A/m6Am. All the datasets need to be carefully reviewed, reformatted to BED6 format, converted to the hg38 reference genome and reannotated with recent gencode annotations. See the overview of the datasets in the table below
|Modification|Article|Initial number of peaks found (Shafik et al)|Final number of peaks found|Number of peaks in lncRNA found (Shafik et al)|Number of uniq lncRNA (Shafik et al)|
|---|---|---|---|---|---|
|Inosine|Z Peng et al. Nat Biotechnol 30, 253-60 (2012)|22686 (22686)|21,111 |3382 (4425)|505 (846)|
|Inosine|M Sakurai et al. Genome Res 24, 522-34 (2014)|20482 (20482)|20,482 |2550 (3050)|319 (400)|
|m5C|S Hussain et al. Cell Rep 4, 255-61 (2013)|1084 (1084)|1,084 |107 (110)|39 (41)|
|m5C|V Khoddami et al. Nat Biotechnol 31, 458-64 (2013)|20553 (20533)|20,553 |1523 (1580)|39 (38)|
|m5C|JE Squires et al. Nucleic Acids Res 40, 5023-33 (2012)|10490 (10490)|10,490 |281 (1544)|112 (711)|
|m6A|D Dominissini et al. Nature 485, 201-6 (2012)|25918 (25776)|2,894 |115 (7397)|84 (6165)|
|m6A|KD Meyer et al. Cell 149, 1635-46 (2012)|4341 (4341)|4,341 |48 (57)|16 (20)|
|m6A and m6Am|B Linder et al. Nat Methods 12, 767-72 (2015)|15167 (NA)|15,167 |385 (NA)|168 (NA)|
|m1A|D Dominissini et al. Nature 530, 441-6 (2016)|32136 (NA)|19552 (HeLa:8873, HepG2:8550, HEK293:2129)|606 (NA)|338 (NA)|
Step2: DARNED is a little messy and hard to convert since positions sharing the same PMID and/or sample types were fused into the same site, which makes it difficult to parse. I think it would be better to duplicate each site for its several PMIDs and sample types. I just need to verify that the number of fields in PMID and cell type is the same and that they correspond to each other, i.e. first in cell type = first in PMID
Step3: I reformatted and filtered the database with reformat_table to be BED6 compatible. I removed the entries lacking either chromosome, position, tissue or PMID, as well as those with unknown strand. In addition I also selected only A>I and A>G transitions (the same thing). This filtering step eliminated 47965 sites
Step4: I tried to see if the PMID and the tissue fields always had the same length, so I could demultiplex the fused positions. The answer is no, the number of PMIDs and tissues can differ. However 272551 positions have only 1 tissue and 1 PMID. These are maybe not the most reliable positions but they are easier to interpret. The sites with only 1 PMID but several tissues can also be used, and the same goes for several PMIDs and 1 tissue: I will demultiplex them so as to have only 1 PMID and 1 tissue per site. The sites with both more than 1 PMID and more than 1 tissue will be extracted to a separate backup file. A sketch of this logic is shown below.
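The sketch below is illustrative only; the separator and exact field layout of the DARNED export are assumptions:
```
def demultiplex_site(chrom, pos, strand, pmids, tissues, sep=","):
    """Return one (chrom, pos, strand, pmid, tissue) tuple per PMID/tissue pair."""
    pmid_list = pmids.split(sep)
    tissue_list = tissues.split(sep)
    if len(pmid_list) > 1 and len(tissue_list) > 1:
        return []                 # several PMIDs AND several tissues -> send to the backup file instead
    if len(pmid_list) == 1:
        pmid_list = pmid_list*len(tissue_list)
    if len(tissue_list) == 1:
        tissue_list = tissue_list*len(pmid_list)
    return [(chrom, pos, strand, p, t) for p, t in zip(pmid_list, tissue_list)]

print(demultiplex_site("chr1", 12345, "+", "15342557", "BRAIN,LUNG"))
```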
Step5: I only lost 4270 sites with several PMIDs and several tissues. Some sites were demultiplexed and I now have 290018 sites with 1 PMID and 1 tissue
Convert DARNED coordinates from hg19 to hg38 with CrossMap
Coordinate conversion using CrossMap and an hg19 to hg38 chain file in BASH
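For reference, the conversion can also be driven from Python (a sketch; the chain file name and paths are assumptions about the local setup):
```
import subprocess
# CrossMap.py bed <chain file> <input bed> <output bed>
subprocess.check_call([
    "CrossMap.py", "bed", "hg19ToHg38.over.chain.gz",
    "DARNED_human_hg19_inosine.bed", "DARNED_human_hg38_inosine.bed"])
```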
Step6: The conversion resulted in the loss of 16 sites, which is negligible compared with the 290002 sites in the database
RADAR
Reformat RADAR Database
The file downloaded from RADAR is not a bed file. I need to modify it to comply with the standard format. The format of the main file Human_AG_all_hg19_v2.txt is the following
Step7: I am not interested in the conservation fields, the annotation and the repetitive nature, but I will keep chromosome, position, gene and strand.
Additional information is also available in a secondary database file that I found hidden on the RADAR website. The information includes the original publication and the source of the biological sample. It is only available for around half of the sites, from 4 publications, and I will only retain these richly annotated sites. The same site can have multiple entries in the secondary file since it could have been reported by several papers in several tissues; I will keep a line for each independently discovered site. The coverage and editing level could be interesting to save too.
Step8: The operation is quite complex since I will have to fuse the 2 files and extract only specific values, so I need to code a specific parser. The main RADAR file will be parsed and organised as a simple embedded dict. The secondary file is the more important one since I will start parsing from it and look up the complementary information in the main database file. Each site will be added to a list of Site objects, which will subsequently be iterated to combine with the main database before writing a new BED-formatted file. A rough outline of the merge is sketched below.
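This is an illustrative sketch only; the column indices are assumptions, not the real RADAR layout:
```
def merge_radar_files(main_file, secondary_file, output_bed):
    """Index the main RADAR table by (chromosome, position), then walk the secondary file
    and write one BED6 line per independently reported site found in both files."""
    main = {}
    with open(main_file) as f:
        next(f)                                    # skip header
        for line in f:
            fields = line.rstrip("\n").split("\t")
            chrom, pos, gene, strand = fields[0], fields[1], fields[2], fields[3]
            main[(chrom, pos)] = (gene, strand)
    kept = 0
    with open(output_bed, "w") as out, open(secondary_file) as f:
        next(f)
        for line in f:
            fields = line.rstrip("\n").split("\t")
            chrom, pos, tissue, pmid = fields[0], fields[1], fields[2], fields[3]
            if (chrom, pos) not in main:
                continue                           # keep only sites present in both files
            gene, strand = main[(chrom, pos)]
            name = "A>I|{}|{}|{}".format(gene, tissue, pmid)
            out.write("\t".join([chrom, pos, pos, name, "0", strand]) + "\n")
            kept += 1
    return kept
```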
Step9: Read the original file, reformat the field and write a new file BED6 compliant.
chrom chromstart chromend name score orient
chr4 774138 774138 A>I|LOC100129917|LUNG
Step10: After combining the information, out of the 1343463 sites in the RADAR secondary file and 2576460 in the RADAR primary file, 1342423 consistent sites were found in both files, i.e. about half of the database sites were filtered out because they were not in both files.
Convert RADAR coordinates from hg19 to hg38 with CrossMap
Coordinate conversion using CrossMap in BASH
Step11: Around 10000 additional sites were lost during the conversion from hg19 to hg38
Shafik et al datasets
File list and define basic exploratory functions
The datasets are pretty clean and already converted to hg38 build coordinates. However, they are not standardized and the syntax can differ, since the bioinformatician in charge of the analysis tried to keep as much information as possible from the original datasets. I will standardize the file names and the bed name fields and reformat the file headers. In addition I will explore the datasets to see if they are consistent and decide whether to use them or not.
Step12: editing_Peng_hg38
Step13: Column 9 contains not only inosine transitions (A>G) but also all the other editing sites they found. Here I will focus on inosine only, so I need to filter out all the other values
Step14: It contains a lot of fields, some of which I don't even know the meaning of. The dataset was not filtered and contains more than just A>G (A>I) transitions. There is a total of 22686 sites but only 21111 are A>G transitions; I filtered out all the other modifications and retained only the A>G transitions.
I am not sure that I should use this dataset, especially since it is already included in the RADAR database.
editing_Sakurai_hg38
Step15: No problem with this dataset, I kept the gene loci name for future comparison after reannotation with GENCODE
m5C_Hussain_hg38
Step16: No problem with this dataset, Since there is no gene loci, I just filed the field with a dash to indicate that it is empty
m5C_Khoddami_hg38
Step17: No problem with this dataset. It seems to be focussed on tRNA gene that are clearly over-represented in the gene list
m5C_Squires_hg38
Step18: 1 nt wide peaks = No problem with this dataset. Since there are no gene loci, I just filed the field with a dash to indicate that it is empty
m6A_Dominissini_hg38
Step19: Apparently there is a problem in the data since some of the peaks can be up to 3 500 000 with is much longer than initially described in the paper. I dowloaded the original peak calling file from the sup data of the paper to compare with this datafile.
Step20: The same problem is also found in the original data, though the values only goes up to 2 500 000... I think that long peaks were improperly called... Looking at the data in detail we can see that most of the peaks are found in the 1 to 130 range. There is also a second smaller peak around 1000. Looking at the original article the mapping was done on the human transcriptome and not the genome. Apparently the coordinates were converted to the genome after and that might explain this decrepancy. I have 2 options => Starting from scratch with a recent genome build, but the dataset seems quite tricky and I am not sure I could do it as well as the original authors. The second option is to be retrictive and keep only the small peaks ie > 1000 pb. I think that I will start by this alternative and go back to the data again if needed. To be sure of the quality of the data I will start from the original data and do the liftover conversion myself.
Step21: The filtering based on the peak length size remove a lot of peaks = nearly 90 % of the dataset
Step22: It is much better but I only have 2894 sites out of the 25000 initially in the dataset. For this first exploratory study it should be OK, but I will probably have to go back to the original data again later
m6A_Meyer_hg38
Step23: The original dataset was OK and clean. Contrary to the previous dataset the width of the peaks are between 100 and 220 pb wich is clearly better. It is however interesting to notice that the peak lengths are not randomly distributed
miCLIP_m6A_Linder2015_hg38
Step24: The name field is unusable since it contains a random number of fieds... I cannot parse it easily. That is not a big problem since I will reannotate the data based on the last gencode annotation realease, I did not save any of the informations contained in the original name field. With miCLIP data the peak are 1 nt wide
MeRIPseq_m1A_Dominissini2016_hg38
Step25: I don't know understand some of the categories for HEPG2 cells
* Hela = No ambiguity
* HEK293 = No ambiguity
* HEPG2_common_mRNA = Unclear, but aparently looking at the data this is the general RNA dataset (including ncRNA) after peak calling in untreated HEPG2 cells
* HEPG2_heat_shock_4h = No ambiguity
* HEPG2_Glucose_starv_4h = No ambiguity
* HEPG2_common_total_RNA = Unclear, but it seems to be commons peaks shared by all the cell types.
Step26: pseudoU_Carlile_hg38
Step27: Dataset OK but only contains only 8 peaks in lncRNA. 1 of the peaks is really wide = 40000 pb The dataset contains only the 8 peaks identify in the lncRNA... The coordinates correspond to the gen coordinates rather than the peaks themselves.. Use the dataset?? Is it really worthy to remap everything for such a low number of eventual peaks?
pseudoU_Li_hg38
Step28: No problem with this dataset
pseudoU_Schwartz_hg38
Step29: No problem with this dataset
Summary of the PTM datasets
Verify datasets homogeneous formating
Step30: OK for all the datasets
Summary of the datasets
Step31: Gene annotation of the PTM datasets
The original annotations might not be optimal, and were probably not made from a single reference annotation file. I will reannotate all the datasets with the latest version, gencode.v24.long_noncoding_RNAs.gff3. I split the file in 3 to retain only genes, transcripts and exons. I also got the general gencode file containing all the annotated genes in the primary assembly
gencode.v24.long_noncoding_RNAs = Contains the comprehensive gene annotation of lncRNA genes on the reference chromosomes
gencode.v24.annotation = Contains the comprehensive gene annotation on the primary assembly (chromosomes and scaffolds) sequence regions
Step32: I found a python wrapper package for bedtools to manipulate bed files. I will use it to intersect my bed files containing the positions of the PTM (or peaks) and the gff3 annotation files. This will allow me to get gene names for each positions
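A minimal pybedtools intersection along those lines might look like this (file names are placeholders):
```
import pybedtools
ptm = pybedtools.BedTool("m6A_Meyer_hg38.bed")
lncrna = pybedtools.BedTool("gencode.v24.long_noncoding_RNAs_gene.gff3")
# Keep each PTM position together with the lncRNA annotation it overlaps
for hit in ptm.intersect(lncrna, wa=True, wb=True):
    fields = hit.fields
    print(fields[3], fields[-1])   # BED name field and the GFF3 attribute column of the gene
```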
Step33: Add the gene name and ensembl gene ID to the bed name field
Test with one file
Step34: It is working ok > looping over all the cleaned PTM files
Step35: Between 1 and 16% (excluding the weird carlile dataset) of the peaks are found in lncRNA annotated in gencode v24.
Iterate through the datasets to find the number of uniq genes found | Python Code:
# pycl imports
from pycl import *
#Std lib imports
import datetime
from glob import glob
from pprint import pprint as pp
from os.path import basename
from os import listdir, remove, rename
from os.path import abspath, basename, isdir
from collections import OrderedDict
# Third party import
import numpy as np
import scipy.stats as stats
import pylab as pl
from Bio import Entrez
# Jupyter specific imports
from IPython.core.display import display, HTML, Markdown, Image
# Pyplot tweaking
%matplotlib inline
pl.rcParams['figure.figsize'] = 30, 10 # that's default image size for this interactive session
# Jupyter display tweaking
toogle_code()
larger_display(75)
# Simplify warning reporting to lighten the notebook style
import warnings
warnings.formatwarning=(lambda message, *args: "{}\n".format(message))
# Allow to use R directly
%load_ext rpy2.ipython
# Specific helper functions
def generate_header (PMID, cell, modification, method):
h = "# Data cleaned, converted to BED6, coordinate converted to hg38 using liftOver\n"
h+= "# Maurits Evers (maurits.evers@anu.edu.au)\n"
h+= "# Data cleaned and standardized. {}\n".format(str (datetime.datetime.today()))
h+= "# Adrien Leger (aleg@ebi.ac.uk)\n"
h+= "# RNA_modification={}|Cell_type={}|Analysis_method={}|Pubmed_ID={}\n".format(modification, cell, method, PMID)
h+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
return h
def file_summary(file, separator=["\t", "|"], max_items=10):
n_line = fastcount(file)
print("Filename:\t{}".format(file))
print("Total lines:\t{}\n".format(n_line))
linerange(file, range_list=[[0,9],[n_line-5, n_line-1]])
print(colsum(file, header=False, ignore_hashtag_line=True, separator=separator, max_items=max_items))
def distrib_peak_len (file, range=None, bins=50, normed=True):
h = []
for line in open (file):
if line[0] != "#":
ls = line.split("\t")
delta = abs(int(ls[1])-int(ls[2]))
h.append(delta)
h.sort()
pl.hist(h,normed=normed, range=range, bins=bins)
pl.show()
def pubmed_fetch(pmid):
Entrez.email = 'your.email@example.com'
handle = Entrez.efetch (db='pubmed', id=pmid, retmode='xml', )
return Entrez.read(handle)[0]
def pmid_to_info(pmid):
results = pubmed_fetch(pmid)
try:
title = results['MedlineCitation']['Article']['ArticleTitle']
except (KeyError, IndexError) as E:
title = "NA"
try:
first_name = results['MedlineCitation']['Article']['AuthorList'][0]['LastName']
except (KeyError, IndexError) as E:
first_name = "NA"
try:
Year = results['MedlineCitation']['Article']['ArticleDate'][0]['Year']
except (KeyError, IndexError) as E:
Year = "NA"
try:
Month = results['MedlineCitation']['Article']['ArticleDate'][0]['Month']
except (KeyError, IndexError) as E:
Month = "NA"
try:
Day = results['MedlineCitation']['Article']['ArticleDate'][0]['Day']
except (KeyError, IndexError) as E:
Day = "NA"
d = {"title":title, "first_name":first_name, "Year":Year, "Month":Month, "Day":Day}
return d
Explanation: Meta Analysis of the Datasets for the Epi² pilot project
RNA PTM DATASETS
PYTHON 3 Notebook
Adrien Leger / EMBL EBI
Starting date 23/05/2016
Import general package and definition of specific functions
End of explanation
file_summary("./PTM_Original_Datasets/DARNED_human_hg19_all_sites.txt")
Explanation: Overview of the datasets
I collected datasets from different sources: the 2 big databases of inosine editing sites, RADAR and DARNED, as well as all the datasets cited in the recent review about lncRNA and epitranscriptomics from Shafik et al. In addition I also have 2 recent datasets for m1A and m6A/m6Am. All the datasets need to be carefully reviewed, reformatted to BED6 format, converted to the hg38 reference genome and reannotated with recent gencode annotations. See the overview of the datasets in the table below
|Modification|Article|Initial number of peaks found (Shafik et al)|Final number of peaks found|Number of peaks in lncRNA found (Shafik et al)|Number of uniq lncRNA (Shafik et al)|
|---|---|---|---|---|---|
|Inosine|Z Peng et al. Nat Biotechnol 30, 253-60 (2012)|22686 (22686)|21,111 |3382 (4425)|505 (846)|
|Inosine|M Sakurai et al. Genome Res 24, 522-34 (2014)|20482 (20482)|20,482 |2550 (3050)|319 (400)|
|m5C|S Hussain et al. Cell Rep 4, 255-61 (2013)|1084 (1084)|1,084 |107 (110)|39 (41)|
|m5C|V Khoddami et al. Nat Biotechnol 31, 458-64 (2013)|20553 (20533)|20,553 |1523 (1580)|39 (38)|
|m5C|JE Squires et al. Nucleic Acids Res 40, 5023-33 (2012)|10490 (10490)|10,490 |281 (1544)|112 (711)|
|m6A|D Dominissini et al. Nature 485, 201-6 (2012)|25918 (25776)|2,894 |115 (7397)|84 (6165)|
|m6A|KD Meyer et al. Cell 149, 1635-46 (2012)|4341 (4341)|4,341 |48 (57)|16 (20)|
|m6A and m6Am|B Linder et al. Nat Methods 12, 767-72 (2015)|15167 (NA)|15,167 |385 (NA)|168 (NA)|
|m1A|D Dominissini et al. Nature 530, 441-6 (2016)|32136 (NA)|19552 (HeLa:8873, HepG2:8550, HEK293:2129)|606 (NA)|338 (NA)|
|pseudouridylation|TM Carlile et al. Nature 515, 143-6 (2014)|8 (8)|8 |4 (4)|3 (3)|
|pseudouridylation|X Li et al. Nat Chem Biol 11, 592-7 (2015)|1489 (1489)|1,489 |48 (58)|44 (54)|
|pseudouridylation|S Schwartz et al. Cell 159, 148-62 (2014)|402 (396)|402 |14 (15)|10 (11)|
|Inosine|DARNED database|333216 (259654)|290,002 |24152 (23574)|1300 (1833)|
|Inosine|RADAR database|2576460 (2576289)|1,342,374 |97118 (218793)|3343 (6376)|
DARNED
Reformat DARNED Database
The file downloaded from DARNED is not a bed file. I need to modify it to comply with the standard format
End of explanation
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
header = "# DARNED database Human all sites hg19 coordinates\n"
header+= "# Data cleaned, filtered for Inosine editing, standardized and converted to BED6 format\n"
header+= "# Adrien Leger (aleg@ebi.ac.uk) - {}\n".format(str (datetime.datetime.today()))
header+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
reformat_table(
input_file = "./PTM_Original_Datasets/DARNED_human_hg19_all_sites.txt",
output_file = "./PTM_Original_Datasets/DARNED_human_hg19_inosine.bed",
init_template = [0,"\t",1,"\t",2,"\t",3,"\t",4,"\t",5,"\t",6,"\t",7,"\t",8,"\t",9],
final_template = ["chr",0,"\t",1,"\t",1,"\t",3,">",4,"|",8,"|-|",9,"|",5,"\t0\t",2],
keep_original_header = False,
header = header,
replace_internal_space = '_',
replace_null_val = "-",
filter_dict = {0:["-"],1:["-"],2:["?"],3:["T","G","C"],4:["C","T","A","U"],8:["-"],9:["-"]},
subst_dict = {4:{"G":"I"}}
)
file_summary("./PTM_Original_Datasets/DARNED_human_hg19_inosine.bed")
Explanation: DARNED is a little messy and hard to convert since positions reported by several PMIDs or sample types were fused into the same site, which makes it difficult to parse. I think it would be better to duplicate each site over its several PMIDs and sample types. I just need to verify whether the number of fields in PMID and cell type is the same and whether they correspond to each other, i.e. first in cell type = first in PMID
End of explanation
d={}
with open("./PTM_Original_Datasets/DARNED_human_hg19_inosine.bed", "r") as f:
for line in f:
if line [0] !="#":
ls = supersplit(line, ["\t","|"])
n_tissue = len(ls[4].split(","))
n_PMID = len(ls[6].split(","))
key="{}:{}".format(n_tissue,n_PMID)
if key not in d:
d[key]=0
d[key]+=1
print (d)
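# The same tally can be written more compactly with collections.Counter
# (a design-choice note; this sketch is functionally equivalent to the dict built above).
from collections import Counter
combo_counter = Counter()
with open("./PTM_Original_Datasets/DARNED_human_hg19_inosine.bed") as f:
    for line in f:
        if line[0] != "#":
            ls = supersplit(line, ["\t", "|"])
            combo_counter["{}:{}".format(len(ls[4].split(",")), len(ls[6].split(",")))] += 1
print(combo_counter.most_common(10))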
Explanation: I reformatted and filtered the database with reformat_table to be BED6 compatible. I removed the entries lacking either chromosome, position, tissue or PMID, as well as those with an unknown strand. In addition I also selected only A>I and A>G transitions (same thing). This filtering step eliminated 47965 sites
End of explanation
infile = "./PTM_Original_Datasets/DARNED_human_hg19_inosine.bed"
outclean = "./PTM_Original_Datasets/DARNED_human_hg19_inosine_cleaned.bed"
outunclean = "./PTM_Original_Datasets/DARNED_human_hg19_inosine_unclean.bed"
with open(infile, "r") as inf, open(outclean, "w") as outf_clean, open(outunclean, "w") as outf_unclean:
init_sites = uniq = several_tissue = several_pmid = several_all = final_sites = 0
for line in inf:
if line [0] == "#":
outf_clean.write(line)
outf_unclean.write(line)
else:
init_sites += 1
ls = supersplit(line, ["\t","|"])
tissue_list = ls[4].split(",")
PMID_list = ls[6].split(",")
n_tissue = len(tissue_list)
n_PMID = len(PMID_list)
if n_tissue == 1:
# 1 PMID, 1 tissue = no problem
if n_PMID == 1:
uniq += 1
final_sites += 1
outf_clean.write(line)
# Several PMID, 1 tissue = demultiplex PMID lines
else:
several_pmid += 1
for PMID in PMID_list:
final_sites += 1
outf_clean.write("{0}\t{1}\t{2}\t{3}|{4}|{5}|{6}|{7}\t{8}\t{9}".format(
ls[0],ls[1],ls[2],ls[3],ls[4],ls[5],PMID.strip(),ls[7],ls[8],ls[9]))
else:
# 1 PMID, several tissues = demultiplex tissues lines
if n_PMID == 1:
several_tissue += 1
for tissue in tissue_list:
final_sites += 1
outf_clean.write("{0}\t{1}\t{2}\t{3}|{4}|{5}|{6}|{7}\t{8}\t{9}".format(
ls[0],ls[1],ls[2],ls[3],tissue.strip().strip("_").strip("."),ls[5],ls[6],ls[7],ls[8],ls[9]))
# Several PMID, several tissues = extract the line in uncleanable datasets
else:
several_all += 1
outf_unclean.write(line)
print("Initial sites: ", init_sites)
print("Final clean sites: ", final_sites)
print("1 PMID 1 tissu: ", uniq)
print("1 PMID ++ tissue: ", several_tissue)
print("++ PMID 1 tissue: ", several_pmid)
print("++ PMID ++ tissue: ", several_all)
file_summary(outclean)
file_summary(outunclean)
Explanation: I tried to see if the PMID and the tissue fields always had the same length, so I could demultiplex the fused positions. The answer is no, the number of PMIDs and tissues can differ. However 272551 positions have only 1 tissue and 1 PMID. These are maybe not the most reliable positions but they might be easier to interpret. The sites with only 1 PMID but several tissues can also be used. Same thing for several PMIDs, 1 tissue. I will demultiplex them so as to have only 1 PMID and 1 tissue per site. Concerning the sites with more than 1 PMID and more than 1 tissue, I will extract them into a separate backup file.
End of explanation
# Conversion to hg38 with Crossmap/liftover
lifover_chainfile = "../LiftOver_chain_files/hg19ToHg38.over.chain.gz"
input_bed = "./PTM_Original_Datasets/DARNED_human_hg19_inosine_cleaned.bed"
temp_bed = "./PTM_Original_Datasets/DARNED_human_hg38_inosine_temp.bed"
cmd = "CrossMap.py bed {} {} {}".format(lifover_chainfile, input_bed, temp_bed)
bash(cmd)
# Rewriting and updating of the header removed by Crossmap
final_bed = "./PTM_Original_Datasets/DARNED_human_hg38_inosine_cleaned.bed"
header = "# DARNED database Human all sites hg38 coordinates\n"
header+= "# Data cleaned, filtered for Inosine editing, standardized, converted to BED6 format and updated to hg38 coordinates\n"
header+= "# Adrien Leger (aleg@ebi.ac.uk) - {}\n".format(str (datetime.datetime.today()))
header+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
with open (temp_bed, "r") as infile, open (final_bed, "w") as outfile:
outfile.write (header)
for line in infile:
outfile.write (line)
file_summary(final_bed)
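# CrossMap usually writes the records it could not lift over to "<output>.unmap"
# (an assumption about CrossMap's BED output naming). A quick check, as a sketch:
from os.path import isfile
unmap_file = temp_bed + ".unmap"
if isfile(unmap_file):
    print("Unmapped sites:", simplecount(unmap_file, ignore_hashtag_line=True))
else:
    print("No unmap file found at", unmap_file)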
Explanation: I only lost 4270 sites with several PMIDs and several tissues. Some sites were demultiplexed and I now have 290018 sites with 1 PMID and 1 tissue
Convert DARNED coordinates from hg19 to hg38 with CrossMap
Coordinate conversion using CrossMap and a hg19 to hg38 chain file in BASH
End of explanation
file_summary("./PTM_Original_Datasets/RADAR_human_hg19_v2_primary.txt", separator=["\t"])
Explanation: The conversion resulted in the loss of 16 sites, which is negligible compared with the 290002 sites in the database
RADAR
Reformat RADAR Database
The file downloaded from RADAR is not a bed file. I need to modify it to comply with the standard format. The format of the main file Human_AG_all_hg19_v2.txt is the following:
End of explanation
file_summary("./PTM_Original_Datasets/RADAR_human_hg19_v2_secondary.txt")
Explanation: I am not interested in the conservation fields, the annotation and the repetitive nature, but I will keep chromosome, position, gene and strand.
Additional information is also available in a secondary database file that I found hidden on the RADAR website. The information includes the original publication and the source of the biological sample. The information is only available for around half of the sites, from 4 publications. I will only retain these richly annotated sites. The same site can have multiple entries in the secondary file since it could have been reported by several papers in several tissues. I will keep a line for each independently discovered site. The coverage and editing level could be interesting to save too.
End of explanation
# Create a structured dict of dict to parse the main database file
from collections import OrderedDict
def parse_RADAR_main (file):
# Define the top level access dict
radar_dict = OrderedDict()
for line in open (file, "r"):
if line[0] != "#":
sl = line.split("\t")
assert len(sl) == 11
chromosome, position, gene, strand = sl[0].strip(), int(sl[1].strip()), sl[2].strip(), sl[3].strip()
if chromosome not in radar_dict:
radar_dict[chromosome] = OrderedDict()
# There should be only one line per position
assert position not in radar_dict[chromosome]
radar_dict[chromosome][position] = {"gene":gene,"strand":strand}
return radar_dict
# Create a class to store a line of the additional file.
from collections import OrderedDict
class Site (object):
#~~~~~~~CLASS FIELDS~~~~~~~#
# Table of correspondance reference => PMID
TITLE_TO_PMID = {
"Peng et al 2012":"22327324",
"Bahn et al 2012":"21960545",
"Ramaswami et al 2012":"22484847",
"Ramaswami et al 2013":"23291724",
}
# Table of correspondance reference => PMID
TISSUE_TO_SAMPLE = {
"Brain":"Brain",
"Illumina Bodymap":"Illumina_Bodymap",
"Lymphoblastoid cell line":"YH",
"U87 cell line":"U87MG"
}
#~~~~~~~FONDAMENTAL METHODS~~~~~~~#
# Parse a line of the aditional information file
def __init__(self, line, ):
sl = line.strip().split("\t")
self.chromosome = sl[0].split(":")[0].strip()
self.position = int(sl[0].split(":")[1].strip())
self.PMID = self.TITLE_TO_PMID[sl[1].strip()]
self.tissue = self.TISSUE_TO_SAMPLE[sl[2].strip()]
self.coverage = sl[3].strip()
self.editing_level = sl[4].strip()
# Fundamental class methods str and repr
def __repr__(self):
msg = "SITE CLASS\n"
# list all values in object dict in alphabetical order
keylist = [key for key in self.__dict__.keys()]
keylist.sort()
for key in keylist:
msg+="\t{}\t{}\n".format(key, self.__dict__[key])
return (msg)
def __str__(self):
return self.__repr__()
a = Site("chr1:1037916 Peng et al 2012 Lymphoblastoid cell line 9 66.67")
print (a)
# Create a structured dict of dict to parse the secondary database file
def parse_RADAR_secondary (file):
# Define a list to store Site object (not a dict because of redundancy)
radar_list = []
for line in open (file, "r"):
if line[0] != "#":
radar_list.append(Site(line))
# return a list sorted by chromosome and positions
return sorted(radar_list, key=lambda Site: (Site.chromosome, Site.position))
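# Quick sanity check of the secondary-file parser (a sketch): tally parsed sites per study.
from collections import Counter
secondary_sites = parse_RADAR_secondary("./PTM_Original_Datasets/RADAR_human_hg19_v2_secondary.txt")
print(Counter(site.PMID for site in secondary_sites).most_common())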
Explanation: The operation is quite complex since I will have to fuse the 2 files and extract only specific values, so I need to code a specific parser. The main RADAR file will be parsed and organised as a simple nested dict. The secondary file is the more important one since I will start parsing from it and then look up the complementary information in the main database file. Each site will be added to a list of Site objects, which will subsequently be iterated to combine with the main database before writing to a new BED-formatted file.
End of explanation
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
def reformat_RADAR (main_file, secondary_file, outfile, errfile, header):
# Read and structure the 2 database files
print("Parse the main database file")
main = parse_RADAR_main(main_file)
print("Parse the secondary database file")
secondary = parse_RADAR_secondary (secondary_file)
print("Combine the data together in a new bed formated file")
with open (outfile, "w+") as csvout, open (errfile, "w+") as errout:
# rewrite header
csvout.write(header)
fail = success = 0
for total, site in enumerate(secondary):
try:
line = "{0}\t{1}\t{1}\t{2}|{3}|{4}|{5}|{6}\t{7}\t{8}\n".format(
site.chromosome,
site.position,
"A>I",
site.tissue,
"-",
site.PMID,
main[site.chromosome][site.position]["gene"],
site.editing_level,
main[site.chromosome][site.position]["strand"],
)
csvout.write(line)
success += 1
except KeyError as E:
line = "{0}\t{1}\t{2}\t{3}\t{4}\n".format(
site.chromosome,
site.position,
site.tissue,
site.PMID,
site.editing_level
)
errout.write(line)
fail += 1
print ("{} Sites processed\t{} Sites pass\t{} Sites fail".format(total, success, fail))
header = "# RADAR database Human v2 all sites hg19 coordinates\n"
header+= "# Data cleaned, standardized and converted to BED6 format\n"
header+= "# Adrien Leger (aleg@ebi.ac.uk) - {}\n".format(str (datetime.datetime.today()))
header+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
reformat_RADAR(
main_file = "./PTM_Original_Datasets/RADAR_human_hg19_v2_primary.txt",
secondary_file = "./PTM_Original_Datasets/RADAR_human_hg19_v2_secondary.txt",
outfile = "./PTM_Original_Datasets/RADAR_Human_hg19_inosine_cleaned.bed",
errfile = "./PTM_Original_Datasets/RADAR_Human_hg19_inosine_orphan.bed",
header = header)
file_summary("./PTM_Original_Datasets/RADAR_Human_hg19_inosine_cleaned.bed")
Explanation: Read the original file, reformat the fields and write a new BED6-compliant file.
chrom chromstart chromend name score orient
chr4 774138 774138 A>I|LOC100129917|LUNG:LYMPHOBLASTOID_CELL_LINE|15342557:15258596:22327324 0 -
End of explanation
# Conversion to hg38 with Crossmap/liftover
lifover_chainfile = "../LiftOver_chain_files/hg19ToHg38.over.chain.gz"
input_bed = "./PTM_Original_Datasets/RADAR_Human_hg19_inosine_cleaned.bed"
temp_bed = "./PTM_Original_Datasets/RADAR_Human_hg38_inosine_temp.bed"
cmd = "CrossMap.py bed {} {} {}".format(lifover_chainfile, input_bed, temp_bed)
bash(cmd)
# Rewriting and updating of the header removed by Crossmap
final_bed = "./PTM_Original_Datasets/RADAR_Human_hg38_inosine_cleaned.bed"
header = "# RADAR database Human v2 all sites hg38 coordinates\n"
header+= "# Data cleaned, standardized, converted to BED6 format and updated to hg38 coordinates\n"
header+= "# Adrien Leger (aleg@ebi.ac.uk) - {}\n".format(str (datetime.datetime.today()))
header+= "# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
with open (temp_bed, "r") as infile, open (final_bed, "w") as outfile:
outfile.write (header)
for line in infile:
outfile.write (line)
file_summary(final_bed)
Explanation: After combining information, out of the 1343463 sites in the RADAR secondary file and 2576460 in the RADAR primary file, 1342423 consistent sites were found in both files, i.e. about half of the database sites were filtered out because they were not present in both files.
Convert RADAR coordinates from hg19 to hg38 with CrossMap
Coordinate conversion using CrossMap in BASH
End of explanation
listdir("./PTM_Original_Datasets/")
Explanation: Around 10000 additional sites were lost during the conversion from hg19 to hg38
Shafik et al datasets
File list and define basic exploratory functions
The datasets are pretty clean and already converted to hg38 build coordinates. However, they are not standardized and the syntax can differ between files, since the bioinformatician in charge of the analysis tried to keep as much information as possible from the original datasets. I will standardize the file names and the bed name fields, and reformat the file headers. In addition I will explore the datasets to see if they are consistent and decide whether to use them or not.
End of explanation
infile="./PTM_Original_Datasets/editing_Peng_hg38.bed"
PMID = "22327324"
cell = "YH"
modification = "A>I"
method = "A_to_I_editing"
author = "Peng"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
print(colsum(infile, colrange=[9], header=False, ignore_hashtag_line=True, separator=["\t", "|"], max_items=20, ret_type="report"))
Explanation: editing_Peng_hg38
End of explanation
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",7,"|",8,"|",9,"|",10,"%|",11,"|",12,"|",13,"|",14,"|",15,"|",16,"|",17,"|",18,"\t",19,"\t",20]
final_template=[0,"\t",1,"\t",2,"\t",9,"|",cell,"|",method,"|",PMID,"|",18,"\t",10,"\t",20]
# filter out all but A>G transition which are Inosine transition
filter_dict={9:["T->C","G->A","C->T","T->G","C->G","G->C","A->C","T->A","C->A","G->T","A->T"]}
# Reformat the field value A->G to A>I for standardization
subst_dict={9:{"A->G":"A>I"}}
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-",
subst_dict = subst_dict,
filter_dict = filter_dict )
file_summary(outfile)
distrib_peak_len(outfile)
Explanation: Column 9 contains more than just inosine transitions (A>G); it also lists all the other editing sites they found. Here I will focus on inosine only, so I need to filter out all the other values
End of explanation
infile = "./PTM_Original_Datasets/editing_Sakurai_hg38.bed"
PMID = "24407955"
cell = "Brain"
modification = "A>I"
method = "ICE_seq"
author = "Sakurai"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"\t",7,"\t",8]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",7,"\t",8]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
Explanation: Contains a lot of fields, some of which I don't even know what they contain. The dataset was not filtered and holds more than just A>G (A>I) transitions. There is a total of 22686 sites but only 21111 are A>G transitions. I filtered out all the other modifications and retained only the A>G transitions.
I am not sure that I should use this dataset, especially since it is already included in the RADAR database.
editing_Sakurai_hg38
End of explanation
infile = "./PTM_Original_Datasets/m5C_Hussain_hg38.bed"
PMID = "23871666"
cell = "HEK293"
modification = "m5C"
method = "miCLIP"
author = "Hussain"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|-\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
Explanation: No problem with this dataset, I kept the gene loci name for future comparison after reannotation with GENCODE
m5C_Hussain_hg38
End of explanation
infile="./PTM_Original_Datasets/m5C_Khoddami_hg38.bed"
PMID = "23604283"
cell = "MEF"
modification = "m5C"
method = "AzaIP"
author = "Khoddami"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
Explanation: No problem with this dataset. Since there is no gene locus, I just filled the field with a dash to indicate that it is empty
m5C_Khoddami_hg38
End of explanation
infile="./PTM_Original_Datasets/m5C_Squires_hg38.bed"
PMID = "22344696"
cell = "HeLa"
modification = "m5C"
method = "bisulfite_seq"
author = "Squires"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|-\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
Explanation: No problem with this dataset. It seems to be focused on tRNA genes, which are clearly over-represented in the gene list
m5C_Squires_hg38
End of explanation
infile="./PTM_Original_Datasets/m6A_Dominissini_hg38.bed"
PMID = "22575960"
cell = "HepG2"
modification = "m6A"
method = "M6A_seq"
author = "Dominissini"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
distrib_peak_len(infile, normed=False, bins=500)
Explanation: 1 nt wide peaks = no problem with this dataset. Since there are no gene loci, I just filled the field with a dash to indicate that it is empty
m6A_Dominissini_hg38
End of explanation
infile="./PTM_Original_Datasets/m6A_Dominissini_hg19_original_table.csv"
file_summary(infile)
distrib_peak_len(infile, normed=False, bins=500)
distrib_peak_len(infile, normed=False, range=[1,2000], bins=500)
distrib_peak_len(infile, normed=False, range=[1,200], bins=200)
Explanation: Apparently there is a problem in the data since some of the peaks can be up to 3,500,000 bp long, which is much longer than initially described in the paper. I downloaded the original peak calling file from the supplementary data of the paper to compare with this datafile.
End of explanation
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
infile="./PTM_Original_Datasets/m6A_Dominissini_hg19_original_table.csv"
outfile = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg19_cleaned.bed"
init_template=[0,"\t",1,"\t",2,"\t",3,"\t",4]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t-\t",3]
# Predicate function to filter out large peaks
predicate = lambda val_list: abs(int(val_list[1])-int(val_list[2])) <= 1000
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-",
predicate = predicate)
file_summary(outfile)
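# Quantify how much the <=1000 bp cutoff removed (a sketch): compare line counts of the
# original table and the filtered BED produced above (header lines are only approximately excluded).
n_before = simplecount(infile, ignore_hashtag_line=True)
n_after = simplecount(outfile, ignore_hashtag_line=True)
print("Peaks kept: {}/{} ({:.1f}%)".format(n_after, n_before, 100.0 * n_after / n_before))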
Explanation: The same problem is also found in the original data, though the values only go up to 2,500,000... I think the long peaks were improperly called. Looking at the data in detail, most of the peaks are found in the 1 to 130 bp range, with a second smaller peak around 1000. Looking at the original article, the mapping was done on the human transcriptome and not the genome; apparently the coordinates were converted to the genome afterwards, which might explain this discrepancy. I have 2 options: start from scratch with a recent genome build, but the dataset seems quite tricky and I am not sure I could do it as well as the original authors; or be restrictive and keep only the small peaks, i.e. no longer than 1000 bp. I will start with this alternative and go back to the data again if needed. To be sure of the quality of the data I will start from the original data and do the liftOver conversion myself.
End of explanation
# Conversion to hg38 with Crossmap/liftover
lifover_chainfile = "../LiftOver_chain_files/hg19ToHg38.over.chain.gz"
input_bed = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg19_cleaned.bed"
temp_bed = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg38_temp.bed"
cmd = "CrossMap.py bed {} {} {}".format(lifover_chainfile, input_bed, temp_bed)
bash(cmd)
# Rewriting and updating of the header removed by Crossmap
final_bed = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg38_cleaned.bed"
header = generate_header(PMID, cell, modification, method)
with open (temp_bed, "r") as infile, open (final_bed, "w") as outfile:
outfile.write (header)
for line in infile:
outfile.write (line)
file_summary(final_bed)
distrib_peak_len(final_bed, normed=False, bins=200)
Explanation: The filtering based on peak length removes a lot of peaks = nearly 90% of the dataset
End of explanation
infile="./PTM_Original_Datasets/m6A_Meyer_hg38.bed"
PMID = "22608085"
cell = "HEK293"
modification = "m6A"
method = "MeRIP_Seq"
author = "Meyer"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"\t",6,"\t",7]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",6,"\t",7]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile, bins = 100)
Explanation: It is much better but I only have 2894 sites out of the 25000 initially in the dataset. For this first exploratory study it should be OK, but I will probably have to go back to the original data again later
m6A_Meyer_hg38
End of explanation
infile="./PTM_Original_Datasets/miCLIP_m6A_Linder2015_hg38.bed"
PMID = "26121403"
cell = "HEK293"
modification = "m6A:m6Am"
method = "miCLIP"
author = "Linder"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"\t",4,"\t",5]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|-\t",4,"\t",5]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
Explanation: The original dataset was OK and clean. Contrary to the previous dataset, the peak widths are between 100 and 220 bp, which is clearly better. It is however interesting to notice that the peak lengths are not randomly distributed
miCLIP_m6A_Linder2015_hg38
End of explanation
infile="./PTM_Original_Datasets/MeRIPseq_m1A_Dominissini2016_hg38.bed"
PMID = "26863196"
cell = "HeLa:HEK293:HepG2"
modification = "m1A"
method = "M1A_seq"
author = "Dominissini"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
Explanation: The name field is unusable since it contains a variable number of fields... I cannot parse it easily. That is not a big problem since I will reannotate the data based on the latest gencode annotation release, so I did not save any of the information contained in the original name field. With miCLIP data the peaks are 1 nt wide
MeRIPseq_m1A_Dominissini2016_hg38
End of explanation
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"\t",6,"\t",7]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",5,"|",method,"|",PMID,"|",3,"\t",6,"\t",7]
# filter out all but A>G transition which are Inosine transition
filter_dict={5:["HEPG2_heat_shock_4h","HEPG2_Glucose_starv_4h","HEPG2_common_total_RNA"]}
# Reformat the field value A->G to A>I for standardization
subst_dict={5:{"HEPG2_common_mRNA":"HepG2"}}
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-",
subst_dict = subst_dict,
filter_dict = filter_dict
)
file_summary(outfile)
distrib_peak_len(outfile)
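# Tally the retained m1A sites per cell type (a sketch), reusing the pycl colsum helper
# on the packed name field of the cleaned file (field 4 = cell type after splitting on tab and pipe).
print(colsum(outfile, colrange=[4], header=False, ignore_hashtag_line=True,
             separator=["\t", "|"], max_items=10, ret_type="report"))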
Explanation: I don't fully understand some of the categories for HEPG2 cells
* Hela = No ambiguity
* HEK293 = No ambiguity
* HEPG2_common_mRNA = Unclear, but aparently looking at the data this is the general RNA dataset (including ncRNA) after peak calling in untreated HEPG2 cells
* HEPG2_heat_shock_4h = No ambiguity
* HEPG2_Glucose_starv_4h = No ambiguity
* HEPG2_common_total_RNA = Unclear, but it seems to be the common peaks shared by all the cell types.
End of explanation
infile="./PTM_Original_Datasets/pseudoU_Carlile_hg38.bed"
PMID = "25192136"
cell = "HeLa"
modification = "Y"
method = "Pseudo_seq"
author = "Carlile"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
Explanation: pseudoU_Carlile_hg38
End of explanation
infile="./PTM_Original_Datasets/pseudoU_Li_hg38.bed"
PMID = "26075521"
cell = "HEK293"
modification = "Y"
method = "CeU_Seq"
author = "Li"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"\t",7,"\t",8]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",7,"\t",8]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
Explanation: Dataset OK but it contains only 8 peaks in lncRNA, and 1 of the peaks is really wide = 40000 bp. The dataset contains only the 8 peaks identified in lncRNA... The coordinates correspond to the gene coordinates rather than the peaks themselves. Use the dataset?? Is it really worth remapping everything for such a low number of eventual peaks?
pseudoU_Li_hg38
End of explanation
infile="./PTM_Original_Datasets/pseudoU_Schwartz_hg38.bed"
PMID = "25219674"
cell = "HEK293:Fibroblast"
modification = "Y"
method = "Psi-seq"
author = "Schwartz"
outfile = "./PTM_Clean_Datasets/{}_{}_{}_hg38_cleaned.bed".format(author, modification, cell, method)
file_summary(infile)
# chrom chromstart chromend modif|cell_type|method|PMID|loci score strand\n"
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"\t",5,"\t",6]
final_template=[0,"\t",1,"\t",2,"\t",modification,"|",cell,"|",method,"|",PMID,"|",4,"\t",5,"\t",6]
reformat_table(
input_file=infile,
output_file=outfile,
init_template=init_template,
final_template=final_template,
keep_original_header = False,
header = generate_header(PMID, cell, modification, method),
replace_internal_space='_',
replace_null_val="-")
file_summary(outfile)
distrib_peak_len(outfile)
Explanation: No problem with this dataset
pseudoU_Schwartz_hg38
End of explanation
for f in sorted(glob("./PTM_Clean_Datasets/*.bed")):
print (f)
linerange(f, [[10,12]])
Explanation: No problem with this dataset
Summary of the PTM datasets
Verify homogeneous formatting across datasets
End of explanation
for f in sorted(glob("./PTM_Clean_Datasets/*.bed")):
print ("\n", "-"*100)
print ("Dataset Name\t{}".format(basename(f)))
print ("Number sites\t{}".format(simplecount(f, ignore_hashtag_line=True)))
a = colsum(
f,
colrange = [3,4,5,6],
header=False,
ignore_hashtag_line=True,
separator=["\t", "|"],
max_items=20,
ret_type="dict"
)
# Get more info via pubmed
print ("PMID")
for pmid,count in a[6].items():
pubmed_info = pmid_to_info(pmid)
print ("\t*{}\t{}\n\t {}. et al, {}\{}\{}\n\t {}".format(
pmid,count,
pubmed_info["first_name"],
pubmed_info["Year"],
pubmed_info["Month"],
pubmed_info["Day"],
pubmed_info["title"]))
# Simple listing for the other fields
for title, col in [["RNA PTM",3],["Tissue/cell",4],["Method",5]]:
print (title)
print(dict_to_report(a[col], ntab=1, max_items=10, tab="\t", sep="\t"))
Explanation: OK for all the datasets
Summary of the datasets
End of explanation
# New dir to create annotated files
mkdir("PTM_Annotated_Datasets")
mkdir("Test")
Explanation: Gene annotation of the PTM datasets
The original annotations might not be optimal, and were probably not made from a unique reference annotation file. I will reannotate all the datasets with the latest version of gencode.v24.long_noncoding_RNAs.gff3. I split the file in 3 to retain only genes, transcripts and exons. I also got the general gencode file containing all the annotated genes in the primary assembly
gencode.v24.long_noncoding_RNAs = Contains the comprehensive gene annotation of lncRNA genes on the reference chromosomes
gencode.v24.annotation = Contains the comprehensive gene annotation on the primary assembly (chromosomes and scaffolds) sequence regions
End of explanation
help(reformat_table)
head("../../Reference_Annotation/gencode_v24.gff3.gz")
import pybedtools
annotation_file = "../../Reference_Annotation/gencode_v24.gff3"
peak_file = "./PTM_Clean_Datasets/Dominissini_m6A_HepG2_hg38_cleaned.bed"
output_file = "./test.bed"
peak = pybedtools.BedTool(peak_file)
annotation = pybedtools.BedTool(annotation_file)
intersection = peak.intersect(annotation, wo=True, s=True)
# Reformat the file generated by pybedtools to a simple Bed format
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",7,"\t",8,"\t",9,"\t",10,"\t",11,"\t",12,"\t",13,
"\t",14,"\t",15,"\t",16,"\t",17,"\tID=",18,";gene_id=",19,";gene_type=",20,";gene_status=",21,
";gene_name=",22,";level=",23,";havana_gene=",24]
final_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",18,"|",20,"|",22,"\t",8,"\t",9]
print(intersection.head())
print("Post processing results")
reformat_table(
input_file=intersection.fn,
output_file=output_file,
init_template=init_template,
final_template=final_template,
replace_internal_space='_',
replace_null_val="-",
keep_original_header = False,
predicate = lambda v: v[12] == "gene"
)
head(output_file)
import pybedtools
def intersect_extract_genecodeID (annotation_file, peak_file, outdir):
output_file = "{}/{}_{}.bed".format(outdir, file_basename(peak_file), file_basename(annotation_file))
genecount_file = "{}/{}_{}_uniq-gene.csv".format(outdir, file_basename(peak_file), file_basename(annotation_file))
site_file = "{}/{}_{}_uniq-sites.csv".format(outdir, file_basename(peak_file), file_basename(annotation_file))
peak = pybedtools.BedTool(peak_file)
annotation = pybedtools.BedTool(annotation_file)
# Intersect the 2 files with pybedtools
print("Intersecting {} with {}".format(file_basename(peak_file), file_basename(annotation_file)))
intersection = peak.intersect(annotation, wo=True, s=True)
# Reformat the file generated by pybedtools to a simple Bed format
init_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",7,"\t",8,"\t",9,"\t",10,"\t",11,"\t",12,"\t",13,
"\t",14,"\t",15,"\t",16,"\t",17,"\tID=",18,";gene_id=",19,";gene_type=",20,";gene_status=",21,
";gene_name=",22,";level=",23,";havana_gene=",24]
final_template=[0,"\t",1,"\t",2,"\t",3,"|",4,"|",5,"|",6,"|",18,"|",20,"|",22,"\t",8,"\t",9]
h = "# Data cleaned, converted to BED6, standardized and coordinates converted to hg38 using liftOver\n"
h+= "# Overlaping gene annotation with gencodev24\n"
h+= "# Adrien Leger (aleg@ebi.ac.uk) {}\n".format(str (datetime.datetime.today()))
h+= "# chrom\tchromstart\tchromend\tmodif|cell_type|method|PMID|ensembl_id|gene_type|gene_name\tscore\tstrand\n"
print("Post processing results")
reformat_table(
input_file=intersection.fn,
output_file=output_file,
init_template=init_template,
final_template=final_template,
replace_internal_space='_',
replace_null_val="-",
header = h,
keep_original_header = False,
predicate = lambda v: v[12] == "gene"
)
# Count the number of lines in the initial and final peak files
i, j = simplecount(peak_file, ignore_hashtag_line=True), simplecount(output_file, ignore_hashtag_line=True)
print("Total initial positions: {}\tTotal final positions: {}".format(i, j))
# Count uniq gene id and uniq positions found in the dataset
geneid_dict = OrderedDict()
coord_dict = OrderedDict()
with open (output_file, "r") as fp:
for line in fp:
if line[0] != "#":
sl= supersplit(line, separator=["\t", "|"])
# write gene id, gene_type,
gene_id = "{}\t{}\t{}".format(sl[7],sl[9],sl[8])
if gene_id not in geneid_dict:
geneid_dict[gene_id] = 0
geneid_dict[gene_id] += 1
coord = "{}:{}-{}".format(sl[0],sl[1],sl[2])
if coord not in coord_dict:
coord_dict[coord] = 0
coord_dict[coord] += 1
print ("Uniq genes found\t{}\nUniq position found\t{}\n".format(
len(geneid_dict.values()), len(coord_dict.values()) ))
# Write each gene id found with the number of time found
with open (genecount_file, "w") as fp:
fp.write(dict_to_report (geneid_dict, max_items=0, sep="\t"))
# Write each gene id found with the number of time found
with open (site_file, "w") as fp:
fp.write(dict_to_report (coord_dict, max_items=0, sep="\t"))
Explanation: I found a Python wrapper package for bedtools to manipulate bed files. I will use it to intersect my bed files containing the positions of the PTM (or peaks) and the gff3 annotation files. This will allow me to get gene names for each position
End of explanation
annotation_file = "../../Reference_Annotation/gencode_v24.gff3"
peak_file = "./PTM_Clean_Datasets/DARNED_human_hg38_inosine_cleaned.bed"
outdir = "./"
output_file = "./DARNED_human_hg38_inosine_cleaned_gencode_v24.bed"
genecount_file = "./DARNED_human_hg38_inosine_cleaned_gencode_v24_uniq-gene.csv"
site_file = "./DARNED_human_hg38_inosine_cleaned_gencode_v24_uniq-sites.csv"
intersect_extract_genecodeID (annotation_file, peak_file, outdir)
file_summary(output_file, separator=["\t","|"])
head (genecount_file)
head (site_file)
remove(output_file)
remove(genecount_file)
remove(site_file)
Explanation: Add the gene name and ensembl gene ID to the bed name field
Test with one file
End of explanation
# Annotation vs gencodev23 lncRNA genes
annotation_file = '/home/aleg/Data/Reference_Annotation/gencode_v24_lncRNAs.gff3'
for peak_file in sorted(glob("./PTM_Clean_Datasets/*.bed")):
outdir = "./PTM_Annotated_Datasets"
intersect_extract_genecodeID (annotation_file, peak_file, outdir)
Explanation: It is working ok > looping over all the cleaned PTM files
End of explanation
# Annotation vs gencodev23 All genes
annotation_file = "/home/aleg/Data/Reference_Annotation/gencode_v24.gff3"
for peak_file in sorted(glob("./PTM_Clean_Datasets/*.bed")):
outdir = "./PTM_Annotated_Datasets"
intersect_extract_genecodeID (annotation_file, peak_file, outdir)
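# Iterate through the per-dataset "*_uniq-gene.csv" reports written above to get, for each
# dataset, the number of unique genes hit (a sketch using the helpers already imported).
for gene_file in sorted(glob("./PTM_Annotated_Datasets/*_uniq-gene.csv")):
    print("{}\t{} unique genes".format(basename(gene_file), simplecount(gene_file)))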
Explanation: Between 1 and 16% (excluding the weird Carlile dataset) of the peaks are found in lncRNA annotated in gencode v24.
Iterate through the datasets to find the number of unique genes found
End of explanation |
8,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: 基本分类:对服装图像进行分类
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 导入 Fashion MNIST 数据集
本指南使用 Fashion MNIST 数据集,该数据集包含 10 个类别的 70,000 个灰度图像。这些图像以低分辨率(28x28 像素)展示了单件衣物,如下所示:
<table>
<tr><td> <img alt="Fashion MNIST sprite" src="https
Step3: 加载数据集会返回四个 NumPy 数组:
train_images 和 train_labels 数组是训练集,即模型用于学习的数据。
测试集、test_images 和 test_labels 数组会被用来对模型进行测试。
图像是 28x28 的 NumPy 数组,像素值介于 0 到 255 之间。标签是整数数组,介于 0 到 9 之间。这些标签对应于图像所代表的服装类:
<table>
<tr>
<th>标签</th>
<th>类</th>
</tr>
<tr>
<td>0</td>
<td>T恤/上衣</td>
</tr>
<tr>
<td>1</td>
<td>裤子</td>
</tr>
<tr>
<td>2</td>
<td>套头衫</td>
</tr>
<tr>
<td>3</td>
<td>连衣裙</td>
</tr>
<tr>
<td>4</td>
<td>外套</td>
</tr>
<tr>
<td>5</td>
<td>凉鞋</td>
</tr>
<tr>
<td>6</td>
<td>衬衫</td>
</tr>
<tr>
<td>7</td>
<td>运动鞋</td>
</tr>
<tr>
<td>8</td>
<td>包</td>
</tr>
<tr>
<td>9</td>
<td>短靴</td>
</tr>
</table>
每个图像都会被映射到一个标签。由于数据集不包括类名称,请将它们存储在下方,供稍后绘制图像时使用:
Step4: 浏览数据
在训练模型之前,我们先浏览一下数据集的格式。以下代码显示训练集中有 60,000 个图像,每个图像由 28 x 28 的像素表示:
Step5: 同样,训练集中有 60,000 个标签:
Step6: 每个标签都是一个 0 到 9 之间的整数:
Step7: 测试集中有 10,000 个图像。同样,每个图像都由 28x28 个像素表示:
Step8: 测试集包含 10,000 个图像标签:
Step9: 预处理数据
在训练网络之前,必须对数据进行预处理。如果您检查训练集中的第一个图像,您会看到像素值处于 0 到 255 之间:
Step10: 将这些值缩小至 0 到 1 之间,然后将其馈送到神经网络模型。为此,请将这些值除以 255。请务必以相同的方式对训练集和测试集进行预处理:
Step11: 为了验证数据的格式是否正确,以及您是否已准备好构建和训练网络,让我们显示训练集中的前 25 个图像,并在每个图像下方显示类名称。
Step12: 构建模型
构建神经网络需要先配置模型的层,然后再编译模型。
设置层
神经网络的基本组成部分是层。层会从向其馈送的数据中提取表示形式。希望这些表示形式有助于解决手头上的问题。
大多数深度学习都包括将简单的层链接在一起。大多数层(如 tf.keras.layers.Dense)都具有在训练期间才会学习的参数。
Step13: 该网络的第一层 tf.keras.layers.Flatten 将图像格式从二维数组(28 x 28 像素)转换成一维数组(28 x 28 = 784 像素)。将该层视为图像中未堆叠的像素行并将其排列起来。该层没有要学习的参数,它只会重新格式化数据。
展平像素后,网络会包括两个 tf.keras.layers.Dense 层的序列。它们是密集连接或全连接神经层。第一个 Dense 层有 128 个节点(或神经元)。第二个(也是最后一个)层会返回一个长度为 10 的 logits 数组。每个节点都包含一个得分,用来表示当前图像属于 10 个类中的哪一类。
编译模型
在准备对模型进行训练之前,还需要再对其进行一些设置。以下内容是在模型的编译步骤中添加的:
损失函数 - 用于测量模型在训练期间的准确率。您会希望最小化此函数,以便将模型“引导”到正确的方向上。
优化器 - 决定模型如何根据其看到的数据和自身的损失函数进行更新。
指标 - 用于监控训练和测试步骤。以下示例使用了准确率,即被正确分类的图像的比率。
Step14: 训练模型
训练神经网络模型需要执行以下步骤:
将训练数据馈送给模型。在本例中,训练数据位于 train_images 和 train_labels 数组中。
模型学习将图像和标签关联起来。
要求模型对测试集(在本例中为 test_images 数组)进行预测。
验证预测是否与 test_labels 数组中的标签相匹配。
向模型馈送数据
要开始训练,请调用 model.fit 方法,这样命名是因为该方法会将模型与训练数据进行“拟合”:
Step15: 在模型训练期间,会显示损失和准确率指标。此模型在训练数据上的准确率达到了 0.91(或 91%)左右。
评估准确率
接下来,比较模型在测试数据集上的表现:
Step16: 结果表明,模型在测试数据集上的准确率略低于训练数据集。训练准确率和测试准确率之间的差距代表过拟合。过拟合是指机器学习模型在新的、以前未曾见过的输入上的表现不如在训练数据上的表现。过拟合的模型会“记住”训练数据集中的噪声和细节,从而对模型在新数据上的表现产生负面影响。有关更多信息,请参阅以下内容:
演示过拟合
避免过拟合的策略
进行预测
在模型经过训练后,您可以使用它对一些图像进行预测。模型具有线性输出,即 logits。您可以附加一个 softmax 层,将 logits 转换成更容易理解的概率。
Step17: 在上例中,模型预测了测试集中每个图像的标签。我们来看看第一个预测结果:
Step18: 预测结果是一个包含 10 个数字的数组。它们代表模型对 10 种不同服装中每种服装的“置信度”。您可以看到哪个标签的置信度值最大:
Step19: 因此,该模型非常确信这个图像是短靴,或 class_names[9]。通过检查测试标签发现这个分类是正确的:
Step20: 您可以将其绘制成图表,看看模型对于全部 10 个类的预测。
Step21: 验证预测结果
在模型经过训练后,您可以使用它对一些图像进行预测。
我们来看看第 0 个图像、预测结果和预测数组。正确的预测标签为蓝色,错误的预测标签为红色。数字表示预测标签的百分比(总计为 100)。
Step22: 让我们用模型的预测绘制几张图像。请注意,即使置信度很高,模型也可能出错。
Step23: 使用训练好的模型
最后,使用训练好的模型对单个图像进行预测。
Step24: tf.keras 模型经过了优化,可同时对一个批或一组样本进行预测。因此,即便您只使用一个图像,您也需要将其添加到列表中:
Step25: 现在预测这个图像的正确标签:
Step26: keras.Model.predict 会返回一组列表,每个列表对应一批数据中的每个图像。在批次中获取对我们(唯一)图像的预测: | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
Explanation: 基本分类:对服装图像进行分类
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/tutorials/keras/classification" class=""><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" class="">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/classification.ipynb" class=""><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/classification.ipynb" class=""><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class="">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/keras/classification.ipynb" class=""><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="">下载笔记本</a></td>
</table>
本指南将训练一个神经网络模型,对运动鞋和衬衫等服装图像进行分类。即使您不理解所有细节也没关系;这只是对完整 TensorFlow 程序的快速概述,详细内容会在您实际操作的同时进行介绍。
本指南使用了 tf.keras,它是 TensorFlow 中用来构建和训练模型的高级 API。
End of explanation
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Explanation: 导入 Fashion MNIST 数据集
本指南使用 Fashion MNIST 数据集,该数据集包含 10 个类别的 70,000 个灰度图像。这些图像以低分辨率(28x28 像素)展示了单件衣物,如下所示:
<table>
<tr><td> <img alt="Fashion MNIST sprite" src="https://tensorflow.google.cn/images/fashion-mnist-sprite.png" class=""> </td></tr>
<tr><td align="center"> <b>图 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST 样本</a>(由 Zalando 提供,MIT 许可)。<br>
</td></tr>
</table>
Fashion MNIST 旨在临时替代经典 MNIST 数据集,后者常被用作计算机视觉机器学习程序的“Hello, World”。MNIST 数据集包含手写数字(0、1、2 等)的图像,其格式与您将使用的衣物图像的格式相同。
本指南使用 Fashion MNIST 来实现多样化,因为它比常规 MNIST 更具挑战性。这两个数据集都相对较小,都用于验证某个算法是否按预期工作。对于代码的测试和调试,它们都是很好的起点。
在本指南中,我们使用 60,000 个图像来训练网络,使用 10,000 个图像来评估网络学习对图像分类的准确率。您可以直接从 TensorFlow 访问 Fashion MNIST。请运行以下代码,直接从 TensorFlow 中导入和加载 Fashion MNIST 数据:
End of explanation
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
Explanation: 加载数据集会返回四个 NumPy 数组:
train_images 和 train_labels 数组是训练集,即模型用于学习的数据。
测试集、test_images 和 test_labels 数组会被用来对模型进行测试。
图像是 28x28 的 NumPy 数组,像素值介于 0 到 255 之间。标签是整数数组,介于 0 到 9 之间。这些标签对应于图像所代表的服装类:
<table>
<tr>
<th>标签</th>
<th>类</th>
</tr>
<tr>
<td>0</td>
<td>T恤/上衣</td>
</tr>
<tr>
<td>1</td>
<td>裤子</td>
</tr>
<tr>
<td>2</td>
<td>套头衫</td>
</tr>
<tr>
<td>3</td>
<td>连衣裙</td>
</tr>
<tr>
<td>4</td>
<td>外套</td>
</tr>
<tr>
<td>5</td>
<td>凉鞋</td>
</tr>
<tr>
<td>6</td>
<td>衬衫</td>
</tr>
<tr>
<td>7</td>
<td>运动鞋</td>
</tr>
<tr>
<td>8</td>
<td>包</td>
</tr>
<tr>
<td>9</td>
<td>短靴</td>
</tr>
</table>
每个图像都会被映射到一个标签。由于数据集不包括类名称,请将它们存储在下方,供稍后绘制图像时使用:
End of explanation
train_images.shape
Explanation: 浏览数据
在训练模型之前,我们先浏览一下数据集的格式。以下代码显示训练集中有 60,000 个图像,每个图像由 28 x 28 的像素表示:
End of explanation
len(train_labels)
Explanation: 同样,训练集中有 60,000 个标签:
End of explanation
train_labels
Explanation: 每个标签都是一个 0 到 9 之间的整数:
End of explanation
test_images.shape
Explanation: 测试集中有 10,000 个图像。同样,每个图像都由 28x28 个像素表示:
End of explanation
len(test_labels)
Explanation: 测试集包含 10,000 个图像标签:
End of explanation
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
Explanation: 预处理数据
在训练网络之前,必须对数据进行预处理。如果您检查训练集中的第一个图像,您会看到像素值处于 0 到 255 之间:
End of explanation
train_images = train_images / 255.0
test_images = test_images / 255.0
Explanation: 将这些值缩小至 0 到 1 之间,然后将其馈送到神经网络模型。为此,请将这些值除以 255。请务必以相同的方式对训练集和测试集进行预处理:
End of explanation
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
Explanation: 为了验证数据的格式是否正确,以及您是否已准备好构建和训练网络,让我们显示训练集中的前 25 个图像,并在每个图像下方显示类名称。
End of explanation
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
Explanation: 构建模型
构建神经网络需要先配置模型的层,然后再编译模型。
设置层
神经网络的基本组成部分是层。层会从向其馈送的数据中提取表示形式。希望这些表示形式有助于解决手头上的问题。
大多数深度学习都包括将简单的层链接在一起。大多数层(如 tf.keras.layers.Dense)都具有在训练期间才会学习的参数。
End of explanation
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: 该网络的第一层 tf.keras.layers.Flatten 将图像格式从二维数组(28 x 28 像素)转换成一维数组(28 x 28 = 784 像素)。将该层视为图像中未堆叠的像素行并将其排列起来。该层没有要学习的参数,它只会重新格式化数据。
展平像素后,网络会包括两个 tf.keras.layers.Dense 层的序列。它们是密集连接或全连接神经层。第一个 Dense 层有 128 个节点(或神经元)。第二个(也是最后一个)层会返回一个长度为 10 的 logits 数组。每个节点都包含一个得分,用来表示当前图像属于 10 个类中的哪一类。
编译模型
在准备对模型进行训练之前,还需要再对其进行一些设置。以下内容是在模型的编译步骤中添加的:
损失函数 - 用于测量模型在训练期间的准确率。您会希望最小化此函数,以便将模型“引导”到正确的方向上。
优化器 - 决定模型如何根据其看到的数据和自身的损失函数进行更新。
指标 - 用于监控训练和测试步骤。以下示例使用了准确率,即被正确分类的图像的比率。
End of explanation
model.fit(train_images, train_labels, epochs=10)
Explanation: 训练模型
训练神经网络模型需要执行以下步骤:
将训练数据馈送给模型。在本例中,训练数据位于 train_images 和 train_labels 数组中。
模型学习将图像和标签关联起来。
要求模型对测试集(在本例中为 test_images 数组)进行预测。
验证预测是否与 test_labels 数组中的标签相匹配。
向模型馈送数据
要开始训练,请调用 model.fit 方法,这样命名是因为该方法会将模型与训练数据进行“拟合”:
End of explanation
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
Explanation: 在模型训练期间,会显示损失和准确率指标。此模型在训练数据上的准确率达到了 0.91(或 91%)左右。
评估准确率
接下来,比较模型在测试数据集上的表现:
End of explanation
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
Explanation: 结果表明,模型在测试数据集上的准确率略低于训练数据集。训练准确率和测试准确率之间的差距代表过拟合。过拟合是指机器学习模型在新的、以前未曾见过的输入上的表现不如在训练数据上的表现。过拟合的模型会“记住”训练数据集中的噪声和细节,从而对模型在新数据上的表现产生负面影响。有关更多信息,请参阅以下内容:
演示过拟合
避免过拟合的策略
进行预测
在模型经过训练后,您可以使用它对一些图像进行预测。模型具有线性输出,即 logits。您可以附加一个 softmax 层,将 logits 转换成更容易理解的概率。
End of explanation
predictions[0]
Explanation: 在上例中,模型预测了测试集中每个图像的标签。我们来看看第一个预测结果:
End of explanation
np.argmax(predictions[0])
Explanation: 预测结果是一个包含 10 个数字的数组。它们代表模型对 10 种不同服装中每种服装的“置信度”。您可以看到哪个标签的置信度值最大:
End of explanation
test_labels[0]
Explanation: 因此,该模型非常确信这个图像是短靴,或 class_names[9]。通过检查测试标签发现这个分类是正确的:
End of explanation
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
Explanation: 您可以将其绘制成图表,看看模型对于全部 10 个类的预测。
End of explanation
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
Explanation: 验证预测结果
在模型经过训练后,您可以使用它对一些图像进行预测。
我们来看看第 0 个图像、预测结果和预测数组。正确的预测标签为蓝色,错误的预测标签为红色。数字表示预测标签的百分比(总计为 100)。
End of explanation
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
Explanation: 让我们用模型的预测绘制几张图像。请注意,即使置信度很高,模型也可能出错。
End of explanation
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
Explanation: 使用训练好的模型
最后,使用训练好的模型对单个图像进行预测。
End of explanation
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
Explanation: tf.keras 模型经过了优化,可同时对一个批或一组样本进行预测。因此,即便您只使用一个图像,您也需要将其添加到列表中:
End of explanation
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
Explanation: 现在预测这个图像的正确标签:
End of explanation
np.argmax(predictions_single[0])
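# Map the predicted index back to its class name (a small sketch added for illustration).
predicted_label = np.argmax(predictions_single[0])
print(class_names[predicted_label])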
Explanation: keras.Model.predict 会返回一组列表,每个列表对应一批数据中的每个图像。在批次中获取对我们(唯一)图像的预测:
End of explanation |
8,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 1
Step1: Let's take a cursory glance at the data to see what we're working with.
Step2: There's a lot of data that we don't care about. For example, 'PassAttempt' is a binary attribute, but there's also an attribute called 'PlayType' which is set to 'Pass' for a passing play.
We define a list of the columns which we're not interested in, and then we delete them
Step3: We can then grab a list of the remaining column names
Step4: Temporary simple data replacement so that we can cast to integers (instead of objects)
Step5: At this point, lots of things are encoded as objects, or with excessively large data types
Step6: We define four lists based on the types of features we're using.
Binary features are separated from the other categorical features so that they can be stored in less space
Step7: We then cast all of the columns to the appropriate underlying data types
Step8: Some additional reformatting to keep for now: encode pass outcomes as 1/0 and drop rows that aren't real plays (quarter end, two-minute warning, end of game)
Step9: Now all of the objects are encoded the way we'd like them to be
Step10: Now we can start to take a look at what's in each of our columns
Step11: Look at the number of yards gained by a FirstDown
Step12: Group by play type
Step13: We can eliminate combos who didn't have at least 10 receptions together, and then re-sample the data. This will remove noise from QB-receiver combos who have very high or low completion rates because they've played very little together.
Step14: We can also extract the highest-completion percentage combos.
Here we take the top-10 most reliable QB-receiver pairs. | Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv('data/data.csv') # read in the csv file
Explanation: Lab 1: Exploring NFL Play-By-Play Data
Data Loading and Preprocessing
To begin, we load the data into a Pandas data frame from a csv file.
End of explanation
df.head()
Explanation: Let's take a cursory glance at the data to see what we're working with.
End of explanation
columns_to_delete = ['Unnamed: 0', 'Date', 'time',
'PosTeamScore', 'PassAttempt', 'RushAttempt',
'DefTeamScore', 'Season', 'PlayAttempted']
#Iterate through and delete the columns we don't want
for col in columns_to_delete:
if col in df:
del df[col]
Explanation: There's a lot of data that we don't care about. For example, 'PassAttempt' is a binary attribute, but there's also an attribute called 'PlayType' which is set to 'Pass' for a passing play.
We define a list of the columns which we're not interested in, and then we delete them
End of explanation
df.columns
Explanation: We can then grab a list of the remaining column names
End of explanation
df.info()
df = df.replace(to_replace=np.nan,value=-1)
Explanation: Temporary simple data replacement so that we can cast to integers (instead of objects)
End of explanation
df.info()
Explanation: At this point, lots of things are encoded as objects, or with excessively large data types
End of explanation
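To make "excessively large" concrete, here is a small sketch (not part of the original lab, using only standard pandas calls) that reports the frame's memory footprint and the mix of dtypes before any casting.
print(df.memory_usage(deep=True).sum() / 1024**2, "MB used by the frame (object columns counted at full size)")
print(df.dtypes.value_counts())  # how many columns are stored as object, int64, float64, ...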
continuous_features = ['TimeSecs', 'PlayTimeDiff', 'yrdln', 'yrdline100',
'ydstogo', 'ydsnet', 'Yards.Gained', 'Penalty.Yards',
'ScoreDiff', 'AbsScoreDiff']
ordinal_features = ['Drive', 'qtr', 'down']
binary_features = ['GoalToGo', 'FirstDown','sp', 'Touchdown', 'Safety', 'Fumble']
categorical_features = df.columns.difference(continuous_features).difference(ordinal_features)
Explanation: We define four lists based on the types of features we're using.
Binary features are separated from the other categorical features so that they can be stored in less space
End of explanation
df[continuous_features] = df[continuous_features].astype(np.float64)
df[ordinal_features] = df[ordinal_features].astype(np.int64)
df[binary_features] = df[binary_features].astype(np.int8)
Explanation: We then cast all of the columns to the appropriate underlying data types
End of explanation
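A quick check, sketched here as one way we might verify the casts rather than as part of the original lab, that the new dtypes actually took effect:
# binary flags should now be int8, ordinals int64, continuous float64
print(df[binary_features + ordinal_features + continuous_features].dtypes)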
df['PassOutcome'].replace(['Complete', 'Incomplete Pass'], [1, 0], inplace=True)
df = df[df["PlayType"] != 'Quarter End']
df = df[df["PlayType"] != 'Two Minute Warning']
df = df[df["PlayType"] != 'End of Game']
Explanation: Some additional reformatting of the play data, which we'll keep for now: the pass outcome is recoded as 1/0 and the 'Quarter End', 'Two Minute Warning' and 'End of Game' rows are dropped.
End of explanation
df.info()
Explanation: Now all of the objects are encoded the way we'd like them to be
End of explanation
df.describe()
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
#Embed figures in the Jupyter Notebook
%matplotlib inline
#Use GGPlot style for matplotlib
plt.style.use('ggplot')
pass_plays = df[df['PlayType'] == "Pass"]
pass_plays_grouped = pass_plays.groupby(by=['Passer'])
Explanation: Now we can start to take a look at what's in each of our columns
End of explanation
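Since pass plays were just grouped by passer, one illustrative follow-up (hypothetical, not in the original lab) is a completion rate per quarterback; it relies on the PassOutcome column having been recoded to 1/0 earlier.
# mean of a 0/1 column is the completion percentage; sort to see the most accurate passers first
completion_rate = pass_plays_grouped['PassOutcome'].mean().sort_values(ascending=False)
print(completion_rate.head(10))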
first_downs_grouped = df.groupby(by=['FirstDown'])
print(first_downs_grouped['Yards.Gained'].count())
print("-----------------------------")
print(first_downs_grouped['Yards.Gained'].sum())
print("-----------------------------")
print(first_downs_grouped['Yards.Gained'].sum()/first_downs_grouped['Yards.Gained'].count())
Explanation: Look at the number of yards gained by a FirstDown
End of explanation
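The count/sum/ratio pattern above can be collapsed into a single aggregation call; a short equivalent sketch:
# 'mean' reproduces the sum()/count() ratio printed above
print(df.groupby('FirstDown')['Yards.Gained'].agg(['count', 'sum', 'mean']))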
plays_grouped = df.groupby(by=['PlayType'])
print(plays_grouped['Yards.Gained'].count())
print("-----------------------------")
print(plays_grouped['Yards.Gained'].sum())
print("-----------------------------")
print(plays_grouped['Yards.Gained'].sum()/plays_grouped['Yards.Gained'].count())
Explanation: Group by play type
End of explanation
size = 10
corr = df.corr()
fig, ax = plt.subplots(figsize=(size, size))
ax.matshow(corr)
plt.xticks(range(len(corr.columns)), corr.columns)
for tick in ax.get_xticklabels():
tick.set_rotation(90)
plt.yticks(range(len(corr.columns)), corr.columns)
Explanation: We can eliminate combos who didn't have at least 10 receptions together, and then re-sample the data. This will remove noise from QB-receiver combos who have very high or low completion rates because they've played very little together.
End of explanation
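The cell above actually draws a correlation matrix; the filtering it describes is not implemented there. As a sketch of what the text describes (an assumption on our part: it presumes the raw data also has a 'Receiver' column alongside 'Passer'), one could write:
# keep only QB-receiver pairs with at least 10 targets, then rank them by completion rate
combo_grouped = pass_plays.groupby(['Passer', 'Receiver']).filter(lambda g: len(g) >= 10).groupby(['Passer', 'Receiver'])
combo_completion = combo_grouped['PassOutcome'].mean().sort_values(ascending=False)
print(combo_completion.head(10))  # the ten most reliable QB-receiver pairs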
import seaborn as sns
%matplotlib inline
# df_dropped = df.dropna()
# df_dropped.info()
selected_types = df.select_dtypes(exclude=["object"])
useful_attributes = df[['FieldGoalDistance','ydstogo']]
print(useful_attributes)
sns.heatmap(corr)
cluster_corr = sns.clustermap(corr)
plt.setp(cluster_corr.ax_heatmap.yaxis.get_majorticklabels(), rotation=0)
# plt.xticks(rotation=90)
fg_analysis = df[['FieldGoalDistance','FieldGoalResult', 'PlayType']]
fg_analysis = fg_analysis[fg_analysis['FieldGoalResult'] != -1]
fg_grouped = fg_analysis.groupby(by=["FieldGoalResult"])
print(fg_grouped.sum()/fg_grouped.count())
sns.violinplot(x="FieldGoalResult", y="FieldGoalDistance", data=fg_analysis, inner="quart")
fg_analysis = fg_analysis[fg_analysis['FieldGoalResult'] != "Blocked"]
fg_analysis = fg_analysis[fg_analysis['PlayType'] == "Field Goal"]
sns.violinplot(x = "PlayType", y="FieldGoalDistance", hue="FieldGoalResult", data=fg_analysis, inner="quart", split = True)
pass_analysis = df[df.PlayType == 'Pass']
pass_analysis = pass_analysis[['PassOutcome','PassLength','PassLocation']]
# print(pass_analysis)
pass_analysis = pass_analysis[pass_analysis.PassLength != -1]
pa_grouped = pass_analysis.groupby(by=['PassLength'])
print(pa_grouped.count())
# pass_analysis['SuccessfulPass'] = pd.cut(df.PassOutcome,[0,1,2],2,labels=['Complete','Incomplete'])
pass_analysis.info()
# Draw a nested violinplot and split the violins for easier comparison
# sns.violinplot(x="PassLocation", y="SuccessfulPass", hue="PassLength", data=pass_analysis, split=True,
# inner="quart")
# sns.despine(left=True)
pass_info = pd.crosstab([pass_analysis['PassLength'],pass_analysis['PassLocation'] ],
pass_analysis.PassOutcome.astype(bool))
print(pass_info)
pass_info.plot(kind='bar', stacked=True)
df.RunGap.value_counts()
pass_rate = pass_info.div(pass_info.sum(1).astype(float),
axis=0) # normalize the value
# print pass_rate
pass_rate.plot(kind='barh',
stacked=True)
# Run data
run_analysis = df[df.PlayType == 'Run']
run_analysis = run_analysis[['Yards.Gained','RunGap','RunLocation']]
runlocation_violinplot = sns.violinplot(x="RunLocation", y="Yards.Gained", data=run_analysis, inner="quart")
run_analysis = run_analysis[run_analysis.RunLocation != -1]
run_analysis['RunGap'].replace(-1, 'up the middle',inplace=True)
# run_analysis['RunLocation'].replace(-1, 'no location',inplace=True)
ra_grouped = run_analysis.groupby(by=['RunGap'])
print(ra_grouped.count())
print(run_analysis.info())
sns.set(style="whitegrid", palette="muted")
# Draw a categorical scatterplot to show each observation
sns.factorplot(x="RunLocation", y="Yards.Gained", hue="RunGap", data=run_analysis)
sns.factorplot(x="RunLocation", y="Yards.Gained", hue="RunGap", data=run_analysis,kind="bar")
sns.factorplot(x="RunLocation", y="Yards.Gained", hue="RunGap", data=run_analysis,kind="violin")
#just compare left and right options
run_lr = run_analysis[(run_analysis['RunLocation'] == 'right') | (run_analysis['RunLocation'] == 'left')]
sns.factorplot(x="RunLocation", y="Yards.Gained", hue="RunGap", data=run_lr,kind="bar")
rungap_violinplot = sns.violinplot(x="RunGap", y="Yards.Gained", data=run_analysis, inner="quart")
rush_plays = df[(df.Rusher != -1)]
rush_plays_grouped = rush_plays.groupby(by=['posteam']).filter(lambda g: len(g) > 10).groupby(by=["posteam"])
yards_per_carry = rush_plays_grouped["Yards.Gained"].sum() / rush_plays_grouped["Yards.Gained"].count()
yards_per_carry.sort_values(inplace=True, ascending=False)
yards_per_carry[0:40].plot(kind='barh')
# run_analysis = df[df.PlayType == 'Run']
# run_analysis = run_analysis[['Yards.Gained','RunGap','RunLocation','Rusher','posteam']]
# run_analysis = run_analysis[run_analysis['posteam'] != -1]
# # runlocation_violinplot = sns.violinplot(x="RunLocation", y="Yards.Gained", data=run_analysis, inner="quart")
# run_analysis['RunGap'].replace(-1, 'up the middle',inplace=True)
# run_an_cleaned = run_analysis[run_analysis.RunLocation != -1]
# # run_analysis['RunLocation'].replace(-1, 'no location',inplace=True)
# run_analysis['Avg_Running_Yards'] = run_analysis[run_analysis['Yards.Gained'].mean()]
# # sns.violinplot(x="posteam", y="Yards.Gained", data=run_analysis, inner="quart")
# team_barplot = sns.barplot(x="posteam", y="Yards.Gained", data=run_analysis)
# # ra_grouped = run_analysis.groupby(by=['RunGap'])
# for item in team_barplot.get_xticklabels():
# item.set_rotation(90)
#shows average yards gained on running plays
# ************ repeat chart **************
rush_plays = df[(df.Rusher != -1)]
rush_plays_grouped = rush_plays.groupby(by=['Rusher']).filter(lambda g: len(g) > 10).groupby(by=["Rusher"])
yards_per_carry = rush_plays_grouped["Yards.Gained"].sum() / rush_plays_grouped["Yards.Gained"].count()
yards_per_carry.sort_values(inplace=True, ascending=False)
# yards_per_carry = yards_per_carry.groupby(by=['posteam'])
# yards_per_carry[0:20].plot(kind='barh')
#sns.barplot(x="Rusher",y="Yards.Gained",hue="posteam",data=rush_plays)
# yards_per_carry.info()
# sns.barplot(hue="posteam", data=yards_per_carry[0:20])
yards_per_carry.index[0]
#str(df[df.Rusher == yards_per_carry.index[0]].posteam.mode())[5:index()]
teams = [ str(df[df.Rusher == yards_per_carry.index[i]].posteam.mode()) for i in range(len(yards_per_carry)) ]
teams = [ x[5:x.index('\n')] for x in teams]
temp_df = pd.DataFrame({'yards_per_carry': yards_per_carry, 'rusher': yards_per_carry.index, 'team': teams})
ax = sns.barplot(x = "yards_per_carry", y = "rusher", hue = "team", data = temp_df[0:10], palette=sns.color_palette("Set2", 20))
## Analyzing scoring during certain times of the game
quarter_data = df[df['sp'] == 1]
sns.countplot(x='qtr',data=quarter_data)
# shows distribution of number of scores per quarter
quarter_data = df[df['sp'] == 1]
quarter_data = quarter_data[['qtr','Touchdown','sp']]
qd_grouped = quarter_data.groupby(by=['qtr'])
print(qd_grouped.count())
qd_info = pd.crosstab([quarter_data['qtr'] ],
quarter_data.Touchdown.astype(bool))
print(qd_info)
qd_info.plot(kind='bar', stacked=True)
time_score_data = df[df['sp'] == 1]
time_score_data = time_score_data[['sp','ScoreDiff','TimeSecs']]
g = sns.jointplot("TimeSecs", "ScoreDiff", data=time_score_data, kind="reg",
xlim=(3600, -900), ylim=(-40, 50), color="r", size=12)
time_score_data = time_score_data[time_score_data['ScoreDiff']>0]
sns.jointplot("TimeSecs", "ScoreDiff", data=time_score_data, kind="reg",
xlim=(3600, -900), ylim=(-40, 50), color="r", size=12)
# regression for winning team
quarter_data = df[df['sp'] == 1]
quarter_data = quarter_data[['qtr','TimeUnder']]
sns.countplot(x="qtr",hue="TimeUnder",data=quarter_data, hue_order = range(15,-1,-1))
Explanation: We can also extract the highest-completion percentage combos.
Here we take the top-10 most reliable QB-receiver pairs.
End of explanation |
8,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quandl
Step1: The data goes all the way back to 2001 and is updated monthly.
Blaze provides us with the first 10 rows of the data for display. Just to confirm, let's just count the number of rows in the Blaze expression
Step2: Let's go plot it for fun. This data set is definitely small enough to just put right into a Pandas DataFrame | Python Code:
# import the dataset
from quantopian.interactive.data.quandl import adp_empl_sec
# Since this data is public domain and provided by Quandl for free, there is no _free version of this
# data set, as found in the premium sets. This import gets you the entirety of this data set.
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
adp_empl_sec.sort('asof_date')
Explanation: Quandl: ADP National Employment Report
In this notebook, we'll take a look at the ADP National Employment Report data set, available on Quantopian. This dataset spans from 2001 through the current day. It contains the value for employment levels as provided by ADP, the payroll service provider. We access this data via the API provided by Quandl. More details on this dataset can be found on Quandl's website.
Blaze
Before we dig into the data, we want to tell you about how you generally access Quantopian partner data sets. These datasets are available using the Blaze library. Blaze provides the Quantopian user with a convenient interface to access very large datasets.
Some of these sets (though not this one) are many millions of records. Bringing that data directly into Quantopian Research just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
To learn more about using Blaze and generally accessing Quantopian partner data, clone this tutorial notebook.
With preamble in place, let's get started:
End of explanation
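As a hedged illustration (not in the original notebook) of shifting work to the server side before materializing locally, one might select just the fields used later in this notebook on the Blaze expression and convert only those with odo:
# field selection happens on the Blaze expression, so only two columns are pulled into pandas
slim_df = odo(adp_empl_sec[['asof_date', 'total_private']], pd.DataFrame)
print(slim_df.tail())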
adp_empl_sec.count()
Explanation: The data goes all the way back to 2001 and is updated monthly.
Blaze provides us with the first 10 rows of the data for display. Just to confirm, let's just count the number of rows in the Blaze expression:
End of explanation
adp_df = odo(adp_empl_sec, pd.DataFrame)
adp_df.plot(x='asof_date', y='total_private')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Employment Levels")
plt.title("ADP Employment Level Data")
plt.legend().set_visible(False)
Explanation: Let's go plot it for fun. This data set is definitely small enough to just put right into a Pandas DataFrame
End of explanation |
8,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian inference tutorial
Step1: Deciding a model
The first thing once we've got some data is decide which is the model that generated the data. In this case we decide that the height of Python developers comes from a normal distribution.
A normal distribution has two parameters, the mean $\mu$ and the standard deviation $\sigma$ (or the variance $\sigma^2$ which is equivalent, as it's just the square of the standard deviation).
Deciding which model to use can be obvious in few cases, but it'll be the most complex part of the statistical inference problem in many others. Some of the obvious cases are
Step2: Following the example, we wanted to score how good are the parameters $\mu=175$ and $\sigma=5$ for our data. So far we choosen these parameters arbitrarily, but we'll choose them in a smarter way later on.
If we take the probability density function (p.d.f.) of the normal distribution and we compute for the first data point of $x$ 183, we have
Step3: This is the probability that 183 was generated by a normal distribution with mean 175 and standard deviation 5.
With scipy we can easily compute the likelihood of all values in our data
Step4: Prior
The prior is our knowledge of the parameters before we observe the data. It's probably the most subjective part of Bayesian inference, and different approaches can be used.
We can use informed priors, and try to give the model as much information as possible. Or use uninformed priors, and let the process find the parameters using mainly the data.
In our case, we can start thinking on which are the possible values for a normal distribution.
For the mean, the range is between $-\infty$ and $\infty$. But we can of course do better than this.
We're interested on the mean of Python developers height. And it's easy to see that the minimum possible height is $0$. And for the maximum, we can start by considering the maximum known human height. This is 272 cms, the maximum measured height of Robert Pershing Wadlow, born in 1918. We can be very confident that the mean of the height of Python developers is in the range $0$ to $272$. So, a first option for an uninformed prior could be all the values in this range with equal probability.
Step5: This could work, but we can do better. Just having 10 data points, the amount of information that we can learn from them is quite limited. And we may use these 10 data points to discover something we already know. That the probability of the mean height being 0 is nil, as it is the probability of the maximum ever observed height. And that the probability of a value like 175 cms is much higher than the probability of a value like 120 cms.
If we know all this before observing any data, why not use it? This is exactly what a prior is. The tricky part is defining the exact prior.
In this case, we don't know the mean of the height of Python developers, but we can check the mean of the height of the world population, which is arond 165. This doesn't need to be the value we're looking for. It's known that there are more male than female Python programmers. And male height is higher, so the value we're looking for will probably be higher. Also, height changes from country to country, and Python programmers are not equally distributed around the world. But we will use our data to try to find the value that contains all these biases. The prior is just a starting point that will help find the value faster.
So, let's use the mean of the world population as the mean of our prior, and we'll take the standard deviation of the world population, 7 cms, and we'll use the double of it. Multiplying it by 2 is arbitrary, but we'll make our prior a bit less informed. As mentioned before, choosing a prior is quite subjective.
Note that it's not necessary to use a normal distribution for the prior. We were considering a uniform distribution before. But in this case it can make sense, as we're more sure than the mean we're looking for will be close to the mean of the human population. | Python Code:
x = [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]
Explanation: Bayesian inference tutorial: a hello world example
The goal is to find a statistical model with its parameters that explains the data.
So, let's assume we've got some data, regarding the height of Python developers.
This is our data:
End of explanation
import numpy
import scipy.stats
from matplotlib import pyplot
mu = 0.
sigma = 1.
x = numpy.linspace(-10., 10., 201)
likelihood = scipy.stats.norm.pdf(x, mu, sigma)
pyplot.plot(x, likelihood)
pyplot.xlabel('x')
pyplot.ylabel('Likelihood')
pyplot.title('Normal distribution with $\mu=0$ and $\sigma=1$');
Explanation: Deciding a model
The first thing once we've got some data is to decide which model generated the data. In this case we decide that the height of Python developers comes from a normal distribution.
A normal distribution has two parameters, the mean $\mu$ and the standard deviation $\sigma$ (or the variance $\sigma^2$ which is equivalent, as it's just the square of the standard deviation).
Deciding which model to use can be obvious in a few cases, but it'll be the most complex part of the statistical inference problem in many others. Some of the obvious cases are:
* The Normal distribution when modelling natural phenomena like human heights.
* The Beta distribution when modelling probability distributions.
* The Poisson distribution when modelling the frequency of events occurring.
In many cases we will use a combination of different distributions to explain how our data was generated.
Each of these distributions has parameters: $\alpha$ and $\beta$ for the Beta distribution, $\lambda$ for the Poisson, or $\mu$ and $\sigma$ for the normal distribution of our example.
The goal of inference is to find the best values for these parameters.
Evaluating a set of parameters
Before trying to find the best parameters, let's choose some arbitrary parameters, and let's evaluate them.
For example, we can choose the values $\mu=175$ and $\sigma=5$. And to evaluate them, we'll use the Bayes formula:
$$P(\theta|x) = \frac{P(x|\theta) \cdot P(\theta)}{P(x)}$$
Given a model, a normal distribution in this case, $P(\theta|x)$ is the probability of the parameters $\theta$ (which are $\mu$ and $\sigma$ in this case) given the data $x$.
The higher the probability of the parameters given the data, the better they are. So, this value is the score we will use to decide which are the best parameters $\mu$ and $\sigma$ for our data $x$, assuming data comes from a normal distribution.
Parts of the problem
To recap, we have:
* Data $x$: [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]
* A model: the normal distribution
* The parameters of the model: $\mu$ and $\sigma$
And we're interested in finding the best values for $\mu$ and $\sigma$ for the data $x$, for example $\mu=175$ and $\sigma=5$.
Bayes formula
Back to Bayes formula for conditional probability:
$$P(\theta|x) = \frac{P(x|\theta) \cdot P(\theta)}{P(x)}$$
We already mentioned that $P(\theta|x)$ is the probability of the parameter values we're checking given the data $x$. And assuming our data is generated by the model we decided, the normal distribution. And this is the value we're interested in maximizing. In Bayesian terminology, $P(\theta|x)$ is known as the posterior.
The posterior is a function of three other values.
$P(x|\theta)$: the likelihood, which is the probability of obtaining the data $x$ if the parameters $\theta$ were the values we're checking (e.g. $\mu=175$ and $\sigma=5$). And always assuming our data is generated by our model, the normal distribution.
$P(\theta)$: the prior, which is our knowledge about the parameters before seeing any data.
$P(x)$: the evidence, which is the probability of the data, not given any specific set of parameters $\theta$, but given the model we chose, the normal distribution in the example.
Likelihood
The likelihood is the probability of obtaining the data $x$ from the chosen model (e.g. the normal distribution) and for a specific set of parameters $\theta$ (e.g. $\mu=175$ and $\sigma=5$).
It is often represented as $\mathcal{L}(\theta|x)$ (note that the order of $\theta$ and $x$ is reversed compared to the probability notation).
In the case of a normal distribution, the formula to compute the probability given $x$ (its probability density function) is:
$$P(x|\theta) = P(x| \mu, \sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$
If we plot it, we obtain the famous normal bell curve (we use $\mu=0$ and $\sigma=1$ in the plot):
End of explanation
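One step the explanation above leaves implicit: for the whole dataset, and assuming the observations are independent, the likelihood of a parameter choice is the product of the per-point densities,
$$\mathcal{L}(\theta|x) = P(x|\theta) = \prod_{i=1}^{n} P(x_i|\theta)$$
which in practice is evaluated as a sum of log-densities for numerical stability.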
import math
1. / math.sqrt(2 * math.pi * (5 **2)) * math.exp(-((183 - 175) ** 2) / (2 * (5 ** 2)))
Explanation: Following the example, we wanted to score how good the parameters $\mu=175$ and $\sigma=5$ are for our data. So far we have chosen these parameters arbitrarily, but we'll choose them in a smarter way later on.
If we take the probability density function (p.d.f.) of the normal distribution and we compute for the first data point of $x$ 183, we have:
$$P(x| \mu, \sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$
where $\mu=175$, $\sigma=5$ and $x=183$, so:
$$P(x=183| \mu=175, \sigma=5) = \frac{1}{\sqrt{2 \cdot \pi \cdot 5^2}} \cdot e^{-\frac{(183 - 175)^2}{2 \cdot 5^2}}$$
If we do the math:
End of explanation
import scipy.stats
mu = 175
sigma = 5
x = [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]
scipy.stats.norm.pdf(x, mu, sigma)
Explanation: This is the probability that 183 was generated by a normal distribution with mean 175 and standard deviation 5.
With scipy we can easily compute the likelihood of all values in our data:
End of explanation
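To see how this turns into a score for competing parameter choices, a short illustration (not from the original notebook; the second pair of values is just an arbitrary alternative) comparing summed log-likelihoods:
import scipy.stats
data = [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]
print(scipy.stats.norm.logpdf(data, 175, 5).sum())   # candidate A: mu=175, sigma=5
print(scipy.stats.norm.logpdf(data, 165, 14).sum())  # candidate B: mu=165, sigma=14
# the higher (less negative) total log-likelihood marks the better-fitting parameters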
import numpy
import scipy.stats
from matplotlib import pyplot
mean_height = numpy.linspace(0, 272, 273)
probability = scipy.stats.uniform.pdf(mean_height, 0, 272)
pyplot.plot(mean_height, probability)
pyplot.xlabel('Mean height')
pyplot.ylabel('Probability')
pyplot.title('Uninformed prior for Python developers height');
Explanation: Prior
The prior is our knowledge of the parameters before we observe the data. It's probably the most subjective part of Bayesian inference, and different approaches can be used.
We can use informed priors, and try to give the model as much information as possible. Or use uninformed priors, and let the process find the parameters using mainly the data.
In our case, we can start by thinking about which values are possible for the parameters of a normal distribution.
For the mean, the range is between $-\infty$ and $\infty$. But we can of course do better than this.
We're interested in the mean of Python developers' height. And it's easy to see that the minimum possible height is $0$. And for the maximum, we can start by considering the maximum known human height. This is 272 cms, the maximum measured height of Robert Pershing Wadlow, born in 1918. We can be very confident that the mean of the height of Python developers is in the range $0$ to $272$. So, a first option for an uninformed prior could be all the values in this range with equal probability.
End of explanation
import numpy
import scipy.stats
from matplotlib import pyplot
world_height_mean = 165
world_height_standard_deviation = 7
mean_height = numpy.linspace(0, 272, 273)
probability = scipy.stats.norm.pdf(mean_height, world_height_mean, world_height_standard_deviation * 2)
pyplot.plot(mean_height, probability)
pyplot.xlabel('Mean height')
pyplot.ylabel('Probability')
pyplot.title('Informed prior for Python developers height');
Explanation: This could work, but we can do better. With just 10 data points, the amount of information we can learn from them is quite limited, and we may end up spending them on discovering something we already know: that the probability of the mean height being 0 is nil, as is the probability of it being the maximum ever observed height, and that a value like 175 cms is far more probable than a value like 120 cms.
If we know all this before observing any data, why not use it? This is exactly what a prior is. The tricky part is defining the exact prior.
In this case, we don't know the mean of the height of Python developers, but we can check the mean of the height of the world population, which is around 165. This doesn't need to be the value we're looking for. It's known that there are more male than female Python programmers, and males are on average taller, so the value we're looking for will probably be higher. Also, height changes from country to country, and Python programmers are not equally distributed around the world. But we will use our data to try to find the value that contains all these biases. The prior is just a starting point that will help find the value faster.
So, let's use the mean of the world population as the mean of our prior, and we'll take the standard deviation of the world population, 7 cms, and use double that value. Multiplying it by 2 is arbitrary, but it makes our prior a bit less informed. As mentioned before, choosing a prior is quite subjective.
Note that it's not necessary to use a normal distribution for the prior. We were considering a uniform distribution before. But in this case it can make sense, as we're more sure that the mean we're looking for will be close to the mean of the human population.
End of explanation |
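The notebook stops at the prior, so as a purely illustrative epilogue (not part of the original material, with sigma fixed at 5 only to keep the sketch one-dimensional), here is a minimal grid computation that combines the informed prior above with the likelihood to get a posterior over the mean:
import numpy
import scipy.stats
data = [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]
mu_grid = numpy.linspace(150, 200, 501)                       # candidate values for the mean
prior = scipy.stats.norm.pdf(mu_grid, 165, 14)                # the informed prior defined above
likelihood = numpy.array([scipy.stats.norm.pdf(data, m, 5).prod() for m in mu_grid])
posterior = prior * likelihood
posterior = posterior / posterior.sum()                       # normalize over the grid
print(mu_grid[posterior.argmax()])                            # posterior mode for the mean height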