# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Go to notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
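The cell above takes name/email pairs. As a standalone illustration (not part of the pyesdoc API), a minimal sketch of sanity-checking author entries before passing them to `DOC.set_author`; the `check_author` helper and the loose email test are assumptions for illustration only:

```python
# Standalone sketch: sanity-check (name, email) pairs before calling
# DOC.set_author(name, email). Illustrative only, not pyesdoc code.
def check_author(name, email):
    """Return True if the pair looks like a plausible author entry."""
    if not name.strip():
        return False
    # Very loose email check: one "@" with text on both sides and a
    # dotted domain. Real validation would be stricter.
    local, sep, domain = email.partition("@")
    return bool(local) and sep == "@" and "." in domain

authors = [("Ada Lovelace", "ada@example.org")]
valid = [a for a in authors if check_author(*a)]
```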
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
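Since the status flag only has two legal values (0 = do not publish, 1 = publish), a guard like the following sketch can catch mistakes before the document is submitted. This helper is an assumption for illustration; pyesdoc performs its own validation:

```python
# Sketch: guard the publication-status flag used above.
# 0 = do not publish, 1 = publish. Illustrative only.
def normalise_publication_status(status):
    if status not in (0, 1):
        raise ValueError(
            f"publication status must be 0 or 1, got {status!r}")
    return status
```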
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
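The "Cardinality" notation used throughout this notebook ("1.1", "0.1", "0.N", "1.N") encodes minimum and maximum value counts, with "N" meaning unbounded. A small sketch of how that notation can be interpreted (hypothetical helpers, not pyesdoc functions):

```python
# Sketch: interpret the "Cardinality" notation used in this notebook
# ("1.1", "0.1", "0.N", "1.N") as (min, max) value counts, where
# max=None means unbounded. Illustrative only.
def parse_cardinality(card):
    lo, hi = card.split(".")
    return int(lo), (None if hi == "N" else int(hi))

def count_ok(card, n_values):
    """True if n_values satisfies the cardinality constraint."""
    lo, hi = parse_cardinality(card)
    return n_values >= lo and (hi is None or n_values <= hi)
```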
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
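For ENUM properties like the coupler above, the value should come from the listed choices, with "Other: [Please specify]" accepting any free-text "Other: ..." string. A standalone sketch of that check (not the pyesdoc implementation):

```python
# Sketch: validate an ENUM property against its listed choices. The
# "Other: [Please specify]" entry accepts any "Other: ..." string.
# Illustrative only, not pyesdoc code.
COUPLER_CHOICES = {"OASIS", "OASIS3-MCT", "ESMF", "NUOPC",
                   "Bespoke", "Unknown", "None"}

def valid_coupler(value):
    return value in COUPLER_CHOICES or value.startswith("Other: ")
```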
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
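The provision properties in this section are ENUMs with cardinality 1.N: at least one value is required, and each value must come from the choice list (or be an "Other: ..." string). A standalone sketch of that combined check, with a hypothetical helper name:

```python
# Sketch: a 1.N ENUM such as "provision" needs at least one value,
# each drawn from the choice list or given as "Other: ...".
# Illustrative only, not pyesdoc code.
PROVISION_CHOICES = {"N/A", "M", "Y", "E", "ES", "C"}

def check_provision(values):
    if not values:
        raise ValueError("cardinality 1.N: at least one value required")
    bad = [v for v in values
           if v not in PROVISION_CHOICES and not v.startswith("Other: ")]
    if bad:
        raise ValueError(f"invalid provision value(s): {bad}")
    return list(values)
```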
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
Description:
DSGRN Query Functions
Step1: We show here the network being considered in this example
Step2: Query Overview
In order to perform queries on the database sometimes preprocessing is necessary. In order to give a uniform approach to this we have adopted a design where each query corresponds to a python class whose name ends with the suffix Query. Each class has a constructor (i.e. __init__ method) which accepts some arguments to indicate parameters of the query (e.g. which database).
We currently have the following queries
Step3: Evaluate the query on a few Morse Graph Indices
Step4: How many matches for each type of query?
Step5: Print the list of Morse graph indices which satisfy the monostable query.
Step6: Directly verify that all returned matches satisfy the corresponding query
Step7: SingleGeneQuery
Our interest is in fixing all combinatorial parameters except for the logic parameter corresponding to a single node and considering the set of parameters corresponding to this choice. Due to the factorization of the parameter graph, this set of parameters is isomorphic to the factor graph associated to the node of interest. In order to handle repeated queries efficiently, it is necessary to prepare a table which reorders information so that it is I/O efficient for algorithms to retrieve. The following does this
Step8: For a single gene query, the queries are graphs isomorphic to the factor graph, and the number of such queries corresponds to the number of "reduced parameter indices". This will be explained in more depth shortly. To help explain this we first examine the following computation
Step9: Importantly, this factorization corresponds to a way to convert a parameter index (an integer) into a pair of integers, one in [0,50) and the other in [0,108), which we call the gene parameter index and the reduced parameter index. The manner in which this is done is technical and has to do with how the integers encode combinatorial parameters using a mixed-radix system. Roughly speaking, the gene parameter index is obtained by extracting a digit from the mixed-radix representation of the parameter index, and what remains after removing the digit entirely (not just setting it to 0) is the reduced parameter index. This process can be reversed as well, so both the original parameter index and the (GeneParameterIndex, ReducedParameterIndex) pair are equivalent representations. The prepare step we just performed created a table of the database's information sorted by ReducedParameterIndex first and GeneParameterIndex second. (The original database sorts by ParameterIndex.)
Performing a single-gene query
Now we perform a query. The result which the query returns is a graph. This graph contains data which has the raw information obtained from the query in the form of a python dictionary (i.e., {key1:value1, key2:value2,...}) where the keys are gene parameter indices, and the values are tuples (hexcode, parameter index, morsegraphindex)
Step10: The query above returns the "MorseGraphIndex" which can be used with the database to retrieve the Morse graph. However we might only want to know if the Morse graph has a certain property. For example, we might want to know if it has 1 minimal node, or multiple (2 or more) minimal nodes. We create a function which takes a "MorseGraphIndex" and returns True if the associated Morse graph has multiple minimal nodes and False otherwise.
Visualizing the query
The above information describes a partially ordered set. In this poset each node corresponds to a parameter index. Each parameter index corresponds to a pair of sub-indices called the "GeneParameterIndex" and the "ReducedParameterIndex" which are the integers resulting from splitting out the "digit" corresponding to the logic parameter of the gene of interest. The "GeneParameterIndex" corresponds directly to the logic parameter of the gene of interest which can also be represented with a "HexCode". Using the hex code representation we learn adjacency information (due to the GPG=CPG theorem). Since our query gives us all of this information, the query automatically determines this information and can display itself as a graph of the labelled poset corresponding to the query. It also comes equipped with some methods for checking graph properties (as we demonstrate later). The nodes themselves are labelled according to their "ParameterIndex" and "MorseGraphIndex"
Step11: Features of the graph query
In addition to being a graph there are other attributes of the query that are of use. In particular,
The graph is as follows
Step12: Testing the query result
The above query indicates that some of the parameters associated with the query had multistability and some did not. In order to make sure everything is working properly, let's take an example of each class and draw the Morse graph. For instance, parameter index 2199 has Morse Graph 18, and is colored blue, which is supposed to correspond to a lack of multistability. We check this and find it is indeed the case
Step13: Similarly, our query result indicates parameter index 2180 corresponds to Morse Graph 84, which is colored red, indicating it does exhibit multistability. We check this as well
Step14: SingleFixedPointQuery, DoubleFixedPointQuery
We have the capability to retrieve parameter indices for which a FP occurs in a certain location. We call these locations "domains". A domain can be indicated by which "bin" it corresponds to along each dimension. A bin is an interval bounded by either (a) consecutive thresholds in a given dimension, (b) between 0 and the first threshold, or (c) bounded below by the last threshold and unbounded above. In particular, for each dimension the number of thresholds is equal to the number of out-edges of the corresponding network node. If there are m such thresholds then there are m+1 locations (bins) along this dimension which we label 0, 1, 2, ..., m. This allows us to describe the location of a domain by listing bin numbers for each dimension.
We can consider many domains at once which are grouped together in rectangular prisms. To represent these, we create a dictionary object where for each variable we produce a key-value pair: the key is the variable name and the value is a list of two integers [a,b], meaning that the variable can only occur in the bins between a and b (inclusive). If we omit a variable from the dictionary it is allowed to be in any bin. Also, if a=b we can simply write "a" instead of "[a,a]". For example
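A hedged illustration of this convention may help. Only "X1" is a confirmed variable name in this network; the other names and the helper function below are ours for illustration, not part of DSGRN.

```python
# Hypothetical bounds dictionaries following the convention above; only
# "X1" is a confirmed variable name in this network, the rest are assumed.
bounds_single = {"X1": 1, "X2": 1, "X3": 0}   # the single domain (1,1,0)
bounds_slab = {"X1": [0, 1], "X3": 2}         # "X2" omitted: any bin allowed

def in_bounds(domain, bounds):
    """Check a domain (dict: variable -> bin) against a bounds dict,
    treating a bare integer a as shorthand for [a, a]."""
    for var, b in bounds.items():
        lo, hi = (b, b) if isinstance(b, int) else b
        if not (lo <= domain[var] <= hi):
            return False
    return True

assert in_bounds({"X1": 1, "X2": 1, "X3": 0}, bounds_single)
assert not in_bounds({"X1": 2, "X2": 1, "X3": 0}, bounds_single)
```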
Step15: Using these "bounds" variables to represent groups of domains, we can use query functions which ask for the collection of morse graphs which have an "FP" node labelled with a domain in those bounds. For example, to find the set of Morse Graph indices corresponding to fixed points in the region specified by "bounds110"
Step16: Find set of Morse Graph indices corresponding to fixed points in the region specified by "bounds210"
Step17: Find set of Morse Graph indices corresponding to fixed points in the region specified by "bounds311"
Step18: Find the set of Morse Graph indices with both a fixed point in 1,1,0 and a fixed point in 3,1,1
Step19: Queries on Graph Properties
It is possible to make queries about graph properties. If we have developed a set of queries about the vertices, we can ask several kinds of questions
Step20: Q1. Is the minimal node red?
Step21: Q2. Is the maximal node yellow?
Step22: Q3(a). Is there an essential green node?
Step23: List all essential green nodes
Step24: Q3(b). Does every path from min to max pass through green?
Step25: No, they don't. What percentage of them pass through green?
Step26: Q3(b)'. Does every path from min to max pass through a blue vertex?
Step27: Which means there are zero paths from minimum to maximum in the subgraph where we take out the blue vertices, correct?
Step28: Q3(c). Is there an intermediate (neither max nor min) green node?
Step29: Visualizing the Essential parameter nodes
Step30: InducibilityQuery
Step31: HysteresisQuery
Python Code:
from DSGRN import *
database = Database("querytest.db")
database.parametergraph.dimension()
Explanation: DSGRN Query Functions
End of explanation
database
print(database.network.specification())
Explanation: We show here the network being considered in this example:
End of explanation
monostable_query_object = MonostableQuery(database)
bistable_query_object = BistableQuery(database)
multistable_query_object = MultistableQuery(database)
Explanation: Query Overview
In order to perform queries on the database sometimes preprocessing is necessary. In order to give a uniform approach to this we have adopted a design where each query corresponds to a python class whose name ends with the suffix Query. Each class has a constructor (i.e. __init__ method) which accepts some arguments to indicate parameters of the query (e.g. which database).
We currently have the following queries:
| Name | Query Parameters | Query Input | Query Output |
| ---- | ----------- | ------------ | --- |
| MonostableQuery | Database | Morse Graph Index | True/False |
| BistableQuery | Database | Morse Graph Index | True/False |
| MultistableQuery | Database | Morse Graph Index | True/False |
| SingleGeneQuery | Database, Name of Network Node | Reduced Parameter Index | Annotated Factor Graph |
| SingleFixedPointQuery | Database, Domain Bounds | Morse Graph Index | True/False |
| DoubleFixedPointQuery | Database, pair of Domain Bounds | Morse Graph Index | True/False |
| MonostableFixedPointQuery | Database, Domain Bounds | Morse Graph Index | True/False |
| InducibilityQuery | Database, Name of Network Node, pair of Domain Bounds | Reduced Parameter Index | Triple of True/False |
| HysteresisQuery | Database, Name of Network Node, pair of Domain Bounds | Reduced Parameter Index | True/False |
When the query object is constructed, it is passed the required parameters and any preprocessing that is required to support the query is done. In some cases the preprocessing is trivial, and in other cases it may be more extensive. After the object is constructed, it can be used to perform queries. This is accomplished by invoking the objects __call__ operator (i.e. treating the object as a function). The call operator receives the query input and returns the query output. For example:
single_gene_query = SingleGeneQuery(database, "X1")
graph = single_gene_query(43)
In the first line, the query object is created with the query parameters database and "X1". This results in computation being done to organize a table in the database to quickly support "Single Gene Queries". The created object single_gene_query has a method __call__ which allows it to be called as a function in order to produce query results. The input of the __call__ method is a "reduced parameter index" and what is returned will be an annotated graph structure specific to what this query does.
In many cases the input to the query is a Morse Graph Index and the output is a boolean value which indicates whether or not the morse graph index is in a precomputed set of matches. These query classes typically also support another method matches which simply returns the set of matches. This allows the following code:
set_of_matches = SingleFixedPointQuery(database, domain_bounds).matches()
In this code, a query object is created, the matches method is called and returns the set of matches, but no reference to the query object is kept. When using this paradigm one should be careful not to unnecessarily create the same query multiple times, or else the same preprocessing step would be repeated.
MonostableQuery, BistableQuery, and MultistableQuery
End of explanation
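The design just described (preprocessing in the constructor, evaluation via the __call__ operator, and a matches() accessor) can be sketched with a toy stand-in. This class is illustrative only and not part of DSGRN; it mirrors the pattern using a plain dict in place of a real database.

```python
# Illustrative sketch of the Query-class design described above; NOT the
# DSGRN implementation -- a plain dict stands in for the database.
class ToyStableCountQuery:
    def __init__(self, morse_graphs, count):
        # "Preprocessing": scan once and cache the matching indices.
        self._matches = {mgi for mgi, stable in morse_graphs.items()
                         if stable == count}

    def __call__(self, morse_graph_index):
        # Query input (a Morse graph index) -> True/False output.
        return morse_graph_index in self._matches

    def matches(self):
        return set(self._matches)

# Toy data: Morse graph index -> number of minimal (stable) nodes.
toy = {0: 1, 1: 2, 2: 1, 3: 3}
monostable = ToyStableCountQuery(toy, 1)   # analogous to MonostableQuery
assert monostable(0) and not monostable(1)
assert monostable.matches() == {0, 2}
```

Note how constructing the object once and calling it many times avoids repeating the preprocessing, which is the point made above.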
monostable_query_object(0)
monostable_query_object(1)
Explanation: Evaluate the query on a few Morse Graph Indices:
End of explanation
print([len(monostable_query_object.matches()), len(bistable_query_object.matches()), len(multistable_query_object.matches())])
Explanation: How many matches for each type of query?
End of explanation
print(monostable_query_object.matches())
Explanation: Print the list of Morse graph indices which satisfy the monostable query.
End of explanation
all( monostable_query_object(mgi) for mgi in monostable_query_object.matches() )
database.DrawMorseGraph(131)
Explanation: Directly verify that all returned matches satisfy the corresponding query:
End of explanation
single_gene_query = SingleGeneQuery(database, "X1")
Explanation: SingleGeneQuery
Our interest is in fixing all combinatorial parameters except for the logic parameter corresponding to a single node and considering the set of parameters corresponding to this choice. Due to the factorization of the parameter graph, this set of parameters is isomorphic to the factor graph associated to the node of interest. In order to handle repeated queries efficiently, it is necessary to prepare a table which reorders information so that it is I/O efficient for algorithms to retrieve. The following does this:
End of explanation
N = single_gene_query.number_of_gene_parameters()
M = single_gene_query.number_of_reduced_parameters()
L = database.parametergraph.size()
print([N, M, N*M, L])
Explanation: For a single gene query, the queries are graphs isomorphic to the factor graph, and the number of such queries corresponds to the number of "reduced parameter indices". This will be explained in more depth shortly. To help explain this we first examine the following computation:
End of explanation
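The computation above checks that this slice of the parameter graph factors exactly. Using the counts quoted later in the text for this example network (50 gene parameters and 108 reduced parameters, i.e. the pair ranges [0,50) and [0,108)), the arithmetic is:

```python
# Factorization check in plain arithmetic; 50 and 108 are the counts the
# text quotes for this example network ([0,50) x [0,108)).
N, M = 50, 108          # gene parameter count, reduced parameter count
L = N * M               # total parameters in the factored slice
assert L == 5400
```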
graph = single_gene_query(43) # 43 is a "reduced parameter index"
graph.data
Explanation: Importantly, this factorization corresponds to a way to convert a parameter index (an integer) into a pair of integers, one in [0,50) and the other in [0,108), which we call the gene parameter index and the reduced parameter index. The manner in which this is done is technical and has to do with how the integers encode combinatorial parameters using a mixed-radix system. Roughly speaking, the gene parameter index is obtained by extracting a digit from the mixed-radix representation of the parameter index, and what remains after removing the digit entirely (not just setting it to 0) is the reduced parameter index. This process can be reversed as well, so both the original parameter index and the (GeneParameterIndex, ReducedParameterIndex) pair are equivalent representations. What the prepare step we just accomplished did was create a table with the database's information which sorted the information by ReducedParameterIndex first and GeneParameterIndex second. (The original database sorts by ParameterIndex.)
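The digit-extraction idea can be sketched in a few lines of plain Python. The radices and the digit position below are made-up illustrative values (the real factor-graph sizes depend on the network), and the helper names are our own, not part of the database API:

```python
# Illustrative sketch of the mixed-radix bookkeeping described above.

def split_parameter_index(pi, radices, k):
    """Return (digit k, remainder with digit k removed) of a mixed-radix integer."""
    lower = 1
    for r in radices[:k]:
        lower *= r
    gene_pi = (pi // lower) % radices[k]                      # the extracted digit
    reduced_pi = (pi // (lower * radices[k])) * lower + pi % lower
    return gene_pi, reduced_pi

def join_parameter_index(gene_pi, reduced_pi, radices, k):
    """Inverse of split_parameter_index: reinsert the removed digit."""
    lower = 1
    for r in radices[:k]:
        lower *= r
    upper, low = divmod(reduced_pi, lower)
    return (upper * radices[k] + gene_pi) * lower + low
```

For radices [3, 4, 5] and k = 1, every parameter index in [0, 60) round-trips through a (gene, reduced) pair with the gene digit in [0, 4) and the reduced index in [0, 15) — the same kind of factorization as the 50 × 108 split seen above.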
Performing a single-gene query
Now we perform a query. The result which the query returns is a graph. This graph contains data which has the raw information obtained from the query in the form of a python dictionary (i,e, {key1:value1, key2:value2,...}) where the keys are gene parameter indices, and the values are tuples (hexcode, parameter index, morsegraphindex)
End of explanation
graph
Explanation: The query above returns the "MorseGraphIndex" which can be used with the database to retrieve the Morse graph. However we might only want to know if the Morse graph has a certain property. For example, we might want to know if it has 1 minimal node, or multiple (2 or more) minimal nodes. We create a function which takes a "MorseGraphIndex" and returns True if the associated Morse graph has multiple minimal nodes and False otherwise.
Visualizing the query
The above information describes a partially ordered set. In this poset each node corresponds to a parameter index. Each parameter index corresponds to a pair of sub-indices called the "GeneParameterIndex" and the "ReducedParameterIndex" which are the integers resulting from splitting out the "digit" corresponding to the logic parameter of the gene of interest. The "GeneParameterIndex" corresponds directly to the logic parameter of the gene of interest which can also be represented with a "HexCode". Using the hex code representation we learn adjacency information (due to the GPG=CPG theorem). Since our query gives us all of this information, the query automatically determines this information and can display itself as a graph of the labelled poset corresponding to the query. It also comes equipped with some methods for checking graph properties (as we demonstrate later). The nodes themselves are labelled according to their "ParameterIndex" and "MorseGraphIndex":
End of explanation
# Create a function which tells us if each vertex has the multistable property:
is_multistable = MultistableQuery(database)
# Change the coloring method of the graph to check for multistability:
graph.color = lambda v : "red" if is_multistable(v) else "blue"
# Display the graph:
graph
Explanation: Features of the graph query
In addition to being a graph there are other attributes of the query that are of use. In particular,
The graph is as follows:
The vertices of the graph (graph.vertices) are named according to Gene Parameter Index (gpi).
graph.edges contains the directed edge p -> q iff p < q and the associated logic parameters are adjacent.
The graph is (by default) labelled with pairs (Parameter index, Morse graph index). The default graph labelling can be changed by replacing the label attribute with a new function. A label function takes the vertex name (i.e. gpi) as input and returns a label string.
The graph is (by default) colored blue. The default graph coloring can be changed by replacing the color attribute with a new function. A color function takes the vertex name as an input and returns a new color string.
In addition the following extra structures are provided:
graph.data is a dictionary from gene parameter index to (hex code, parameter index, morse graph index)
graph.mgi is a function which accepts a gpi and returns the associated Morse graph index
graph.num_inputs is the number of network edges which are inputs to the gene associated with the query
graph.num_outputs is the number of network edges which are outputs of the gene associated with the query
graph.essential is a boolean-valued function which determines if each vertex corresponds to an essential parameter node
Changing the color to inspect node properties
In the above graph all the nodes have the same color. We can change this so that the color of the nodes reflects some property of our choosing. As an example, we might ask if a node has a Morse graph with multistability -- if so, we can color the node red, otherwise we can color the node blue. This is done as follows:
End of explanation
database.DrawMorseGraph(18)
Explanation: Testing the query result
The above query indicates that some of the parameters associated with the query had multistability and some did not. In order to make sure everything is working properly, let's take an example of each class and draw the Morse graph. For instance, parameter index 2199 has Morse Graph 18, and is colored blue, which is supposed to correspond to a lack of multistability. We check this and find it is indeed the case:
End of explanation
database.DrawMorseGraph(84)
Explanation: Similarly, our query result indicates parameter index 2180 corresponds to Morse Graph 84, which is colored red, indicating it does exhibit multistability. We check this as well:
End of explanation
bounds110 = {"X1":1,"X2":1,"X3":0} # Domain 1,1,0
bounds210 = {"X1":[2,2],"X2":[1,1],"X3":[0,1]} # Domain 2,1,0 or Domain 2,1,1
bounds311 = {"X1":[3,3],"X2":[1,1],"X3":[1,1]} # Domain 3,1,1
Explanation: SingleFixedPointQuery, DoubleFixedPointQuery
We have the capability to retrieve parameter indices for which a FP occurs in a certain location. We call these locations "domains". A domain can be indicated by which "bin" it corresponds to along each dimension. A bin is an interval bounded by either (a) consecutive thresholds in a given dimension, (b) between 0 and the first threshold, or (c) bounded below by the last threshold and unbounded above. In particular, for each dimension the number of thresholds is equal to the number of out-edges of the corresponding network node. If there are m such thresholds then there are m+1 locations (bins) along this dimension which we label 0, 1, 2, ..., m. This allows us to describe the location of a domain by listing bin numbers for each dimension.
We can consider many domains at once which are grouped together in rectangular prisms. To represent these, we create a dictionary object where for each variable we provide a key-value pair: the key is the variable name and the value is a list of two integers [a,b], meaning that the variable may only occur in the bins between a and b (inclusive). If we omit a variable from the dictionary it is allowed to be in any bin. Also, if a=b we can simply write "a" instead of "[a,a]". For example:
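As a rough illustration of this convention, the following hypothetical helper (not part of the database API) expands a bounds dictionary into the individual domains it describes, assuming we are given the number of thresholds of each variable:

```python
from itertools import product

# `thresholds` lists (variable, m) pairs in dimension order, where m is
# the number of thresholds, so the bins along that dimension are 0..m.

def expand_bounds(bounds, thresholds):
    ranges = []
    for var, m in thresholds:
        spec = bounds.get(var, [0, m])      # omitted variable: any bin
        if isinstance(spec, int):           # "a" is shorthand for [a, a]
            spec = [spec, spec]
        ranges.append(range(spec[0], spec[1] + 1))
    return list(product(*ranges))
```

With the assumed threshold counts below, bounds210 from the example expands to exactly the two domains 2,1,0 and 2,1,1 noted in its comment.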
End of explanation
matches110 = SingleFixedPointQuery(database, bounds110).matches()
Explanation: Using these "bounds" variables to represent groups of domains, we can use query functions which ask for the collection of morse graphs which have an "FP" node labelled with a domain in those bounds. For example, to find the set of Morse Graph indices corresponding to fixed points in the region specified by "bounds110":
End of explanation
matches210 = SingleFixedPointQuery(database, bounds210).matches()
Explanation: Find set of Morse Graph indices corresponding to fixed points in the region specified by "bounds210":
End of explanation
matches311 = SingleFixedPointQuery(database, bounds311).matches()
Explanation: Find set of Morse Graph indices corresponding to fixed points in the region specified by "bounds311":
End of explanation
matches_both = DoubleFixedPointQuery(database, bounds110,bounds311).matches()
len(matches110), len(matches210), len(matches311), len(matches_both)
matches_both
Explanation: Find the set of Morse Graph indices with both a fixed point in 1,1,0 and a fixed point in 3,1,1:
End of explanation
graph.color = lambda v : "green" if graph.mgi(v) in matches_both else ("blue" if graph.mgi(v) in matches210 else ( "yellow" if graph.mgi(v) in matches311 else "red"))
graph
minimum_gpi = 0
maximum_gpi = len(graph.vertices) - 1
Explanation: Queries on Graph Properties
It is possible to make queries about graph properties. If we have developed a set of queries about the vertices, we can ask several kinds of questions:
1) Does the minimal node have a certain property?
2) Does the maximal node have a certain property?
3) Must every path from the minimal node to the maximal node pass through a node with a certain property?
We can even ask questions about how many paths from the minimal node to the maximal node have a certain property (or the fraction of paths).
To help visualize the examples we color the graph "green", "blue", "red", and "yellow" according to each vertex's status with regard to the FP location query examples above. Specifically:
End of explanation
graph.color(minimum_gpi) == "red"
Explanation: Q1. Is the minimal node red?
End of explanation
graph.color(maximum_gpi) == "yellow"
Explanation: Q2. Is the maximal node yellow?
End of explanation
any( graph.essential(v) and graph.color(v) == "green" for v in graph.vertices)
Explanation: Q3(a). Is there an essential green node?
End of explanation
[v for v in graph.vertices if graph.essential(v) and graph.color(v) == "green"]
Explanation: List all essential green nodes:
End of explanation
predicate = lambda v : graph.color(v) == "green"
graph.unavoidable(minimum_gpi,maximum_gpi,predicate)
Explanation: Q3(b). Does every path from min to max pass through green?
End of explanation
subgraph = graph.subgraph(lambda v : not predicate(v))
number_missing_green = subgraph.numberOfPaths(minimum_gpi,maximum_gpi)
total_number = graph.numberOfPaths(minimum_gpi,maximum_gpi)
print(str((1.0 - number_missing_green / total_number) * 100.0) + "%")
Explanation: No, they don't. What percentage of them pass through green?
End of explanation
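The percentage computed above relies on counting source-to-target paths in a DAG. A minimal sketch of such a counter (illustrative names, not the tutorial's API) is:

```python
from functools import lru_cache

# Count directed source->target paths in a DAG given as {vertex: [successors]}.

def count_paths(adjacency, source, target):
    @lru_cache(maxsize=None)
    def walk(v):
        if v == target:
            return 1
        return sum(walk(w) for w in adjacency.get(v, ()))
    return walk(source)
```

A predicate is then unavoidable exactly when the path count in the subgraph with the predicate's vertices removed drops to zero, which is the check performed above.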
predicate = lambda v : graph.color(v) == "blue"
graph.unavoidable(minimum_gpi,maximum_gpi,predicate)
Explanation: Q3(b)'. Does every path from min to max pass through a blue vertex?
End of explanation
subgraph = graph.subgraph(lambda v : graph.color(v) != "blue")
if subgraph.numberOfPaths(minimum_gpi,maximum_gpi) == 0: print("Correct.")
Explanation: Which means there are zero paths from minimum to maximum in the subgraph where we take out the blue vertices, correct?
End of explanation
any( v != minimum_gpi and v != maximum_gpi and graph.color(v) == "green" for v in graph.vertices)
Explanation: Q3(c). Is there an intermediate (neither max nor min) green node?
End of explanation
graph.color = lambda v : "red" if graph.essential(v) else "green"
graph
Explanation: Visualizing the Essential parameter nodes:
End of explanation
inducibility_query_object = InducibilityQuery(database, "X1", bounds110, bounds311)
reduced_parameters = range(0, inducibility_query_object.GeneQuery.number_of_reduced_parameters())
[ inducibility_query_object(rpi) for rpi in reduced_parameters ][0:10]
Explanation: InducibilityQuery
End of explanation
hysteresis_query_object = HysteresisQuery(database, "X1", bounds110, bounds311)
reduced_parameters = range(0, hysteresis_query_object.GeneQuery.number_of_reduced_parameters())
[ hysteresis_query_object(rpi) for rpi in reduced_parameters ][0:10]
Explanation: HysteresisQuery
End of explanation |
Description:
Introduction
This tutorial introduces the basic features for simulating titratable systems via the constant pH method.
The constant pH method is one of the methods implemented for simulating systems with chemical reactions within the Reaction Ensemble module. It is a Monte Carlo method designed to model an acid-base ionization reaction at a given (fixed) value of solution pH.
We will consider a homogeneous aqueous solution of a titratable acidic species $\mathrm{HA}$ that can dissociate in a reaction that is characterized by the equilibrium constant $\mathrm{p}K_A=-\log_{10} K_A$
$$\mathrm{HA} \Leftrightarrow \mathrm{A}^- + \mathrm{H}^+$$
If $N_0 = N_{\mathrm{HA}} + N_{\mathrm{A}^-}$ is the number of titratable groups in solution, then we define the degree of dissociation $\alpha$ as
Step1: After defining the simulation parameters, we set up the system that we want to simulate. It is a polyelectrolyte chain with some added salt that is used to control the ionic strength of the solution. For the first run, we set up the system without any steric repulsion and without electrostatic interactions. In the next runs, we will add the steric repulsion and electrostatic interactions to observe their effect on the ionization.
Step2: After creating the particles, we initialize the reaction ensemble by setting the temperature, exclusion radius, and seed of the random number generator. We set the temperature to unity, which determines that our reduced unit of energy will be $\varepsilon=1k_{\mathrm{B}}T$. In an interacting system the exclusion radius ensures that particle insertions too close to other particles are not attempted. Such insertions would make the subsequent Langevin dynamics integration unstable. If the particles are not interacting, we can set the exclusion radius to $0.0$; otherwise, $1.0$ is a good value. We set the seed to a constant value to ensure reproducible results.
Step3: The next step is to define the reaction system. The order in which species are written in the lists of reactants and products is very important for ESPResSo. When a reaction move is performed, the identity of the first species in the list of reactants is changed to the first species in the list of products, the second reactant species is changed to the second product species, and so on. If the reactant list has more species than the product list, then the excess reactant species are deleted from the system. If the product list has more species than the reactant list, then the excess product species are created and randomly placed inside the simulation box. This convention is especially important if some of the species belong to a chain-like molecule, and cannot be placed at an arbitrary position.
In the example below, the order of reactants and products ensures that the identity of $\mathrm{HA}$ is changed to $\mathrm{A^{-}}$ and vice versa, while $\mathrm{H^{+}}$ is inserted/deleted in the reaction move. Reversing the order of products in our reaction (i.e. from product_types=[TYPE_B, TYPE_A] to product_types=[TYPE_A, TYPE_B]) would result in a reaction move where the identity of $\mathrm{HA}$ would be changed to $\mathrm{H^{+}}$, while $\mathrm{A^{-}}$ would be inserted/deleted at a random position in the box. We also assign charges to each type because the charge will play an important role later, in simulations with electrostatic interactions.
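The pairing convention itself can be illustrated without ESPResSo. The helper below is purely hypothetical and only mirrors the bookkeeping described above:

```python
# The i-th reactant identity becomes the i-th product identity; unmatched
# reactants are deleted, unmatched products are inserted at random positions.

def pair_reaction_species(reactant_types, product_types):
    changed = list(zip(reactant_types, product_types))
    deleted = reactant_types[len(product_types):]
    created = product_types[len(reactant_types):]
    return changed, deleted, created
```

For the forward move of our reaction, HA is relabeled to A⁻ and one B⁺ is created; reversing the product order would instead relabel HA to B⁺ and insert A⁻ at a random position.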
Step4: Next, we perform simulations at different pH values. The system must be equilibrated at each pH before taking samples.
Calling RE.reaction(X) attempts in total X reactions (in both backward and forward direction).
Step5: Results
Finally we plot our results and compare them to the analytical results obtained from the Henderson-Hasselbalch equation.
Statistical Uncertainty
The molecular simulation produces a sequence of snapshots of the system that constitutes a Markov chain: a sequence of realizations of a random process in which the next value depends on the preceding one. Therefore, the subsequent values are correlated. To estimate the statistical error of the averages determined in the simulation, one needs to correct for these correlations.
Here, we will use a rudimentary way of correcting for correlations, termed the binning method.
We refer the reader to specialized literature for a more sophisticated discussion, for example Janke2002. The general idea is to group a long sequence of correlated values into a rather small number of blocks, and compute an average for each block. If the blocks are big enough, they
can be considered uncorrelated, and one can apply the formula for the standard error of the mean of uncorrelated values. If the number of blocks is small, then they are uncorrelated but the obtained error estimates have a high uncertainty. If the number of blocks is high, then they are too short to be uncorrelated, and the obtained error estimates are systematically lower than the correct value. Therefore, the method works well only if the sample size is much greater than the autocorrelation time, so that it can be divided into a sufficient number of mutually uncorrelated blocks.
In the example below, we use a fixed number of 16 blocks to obtain the error estimates.
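The binning estimate itself can be sketched with the standard library (a minimal version of our own; the tutorial's actual analysis script is separate):

```python
from math import sqrt
from statistics import mean, stdev

# Split the correlated series into n_blocks blocks and report the overall
# mean together with the standard error of the block means.

def block_average(samples, n_blocks=16):
    block_size = len(samples) // n_blocks
    block_means = [mean(samples[i * block_size:(i + 1) * block_size])
                   for i in range(n_blocks)]
    return mean(block_means), stdev(block_means) / sqrt(n_blocks)
```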
Step6: The simulation results for the non-interacting case compare very well with the analytical solution of the Henderson-Hasselbalch equation. There are only minor deviations, and the estimated errors are small too. This situation will change when we introduce interactions.
It is useful to check whether the estimated errors are consistent with the assumptions that were used to obtain them. To do this, we follow Janke2002 to estimate the number of uncorrelated samples per block, and check whether each block contains a sufficient number of uncorrelated samples (we choose 10 uncorrelated samples per block as the threshold value).
Intentionally, we make our simulation slightly too short, so that it does not produce enough uncorrelated samples. We encourage the reader to vary the number of blocks or the number of samples to see how the estimated error changes with these parameters.
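This check can be sketched with the common rule of thumb $N_\mathrm{eff} \approx N/(2\tau_\mathrm{int})$, where $\tau_\mathrm{int}$ is the integrated autocorrelation time. The helper and the numbers below are illustrative assumptions, not values measured in this simulation:

```python
# Effectively uncorrelated samples per block, given an (assumed) estimate
# of the integrated autocorrelation time tau_int.

def uncorrelated_samples_per_block(n_samples, n_blocks, tau_int):
    block_size = n_samples // n_blocks
    return block_size / (2.0 * tau_int)
```

For example, 160 samples in 16 blocks with an assumed $\tau_\mathrm{int}=5$ leave only one uncorrelated sample per block — well below the threshold of 10 used above.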
Step7: To look in more detail at the statistical accuracy, it is useful to plot the deviations from the analytical result. This provides another way to check the consistency of error estimates. About 68% of the results should be within one error bar from the analytical result, whereas about 95% of the results should be within two times the error bar. Indeed, if you plot the deviations by running the script below, you should observe that most of the results are within one error bar from the analytical solution, a smaller fraction of the results is slightly further than one error bar, and one or two might be about two error bars apart. Again, this situation will change when we introduce interactions because the ionization of the interacting system should deviate from the Henderson-Hasselbalch equation.
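The 68%/95% check can be automated with a small helper of our own (the deviations and error bars in the test are made up):

```python
# Fraction of deviations from the analytical result that fall within
# k error bars.

def fraction_within(deviations, errors, k=1.0):
    inside = sum(abs(d) <= k * e for d, e in zip(deviations, errors))
    return inside / len(deviations)
```

For consistent error estimates one expects roughly 0.68 at k = 1 and 0.95 at k = 2.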
Step8: The Neutralizing Ion $\mathrm{B^+}$
Up to now we did not discuss the chemical nature of the neutralizer $\mathrm{B^+}$. The added salt is not relevant in this context; therefore we omit it from the discussion. The simplest case to consider is what happens if you add the acidic polymer to pure water ($\mathrm{pH} = 7$). Some of the acid groups dissociate and release $\mathrm{H^+}$ ions into the solution. The pH decreases to a value that depends on $\mathrm{p}K_{\mathrm{A}}$ and on the concentration of ionizable groups. Now, three ionic species are present in the solution
Step9: The plot shows that at intermediate pH the concentration of $\mathrm{B^+}$ ions is approximately equal to the concentration of $\mathrm{M^+}$ ions. Only at one specific $\mathrm{pH}$ is the concentration of $\mathrm{B^+}$ ions exactly equal to the concentration of $\mathrm{M^+}$ ions. This is the pH one obtains when dissolving the weak acid $\mathrm{A}$ in pure water.
In an ideal system, the ions missing in the simulation have no effect on the ionization degree. In an interacting system, the presence of ions in the box affects the properties of other parts of the system. Therefore, in an interacting system this discrepancy is harmless only at intermediate pH. The effect of the small ions on the rest of the system can be estimated from the overall ionic strength.
$$ I = \frac{1}{2}\sum_i c_i z_i^2 $$ | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import scipy.constants # physical constants
import espressomd
import pint # module for working with units and dimensions
from espressomd import electrostatics, polymer, reaction_ensemble
from espressomd.interactions import HarmonicBond
ureg = pint.UnitRegistry()
# sigma=0.355 nm is a commonly used particle size in coarse-grained simulations
ureg.define('sigma = 0.355 * nm = sig')
sigma = 1.0 * ureg.sigma # variable that has the value and dimension of one sigma
# N_A is the numerical value of Avogadro constant in units 1/mole
N_A = scipy.constants.N_A/ureg.mole
Bjerrum = 0.715 * ureg.nanometer # Bjerrum length at 300K
# define that concentration is a quantity that must have a value and a unit
concentration = ureg.Quantity
# System parameters
#############################################################
# 0.01 mol/L is a reasonable concentration that could be used in experiments
c_acid = concentration(1e-3, 'mol/L')
# Using the constant-pH method is safe if Ionic_strength > max(10**(-pH), 10**(-pOH) ) and C_salt > C_acid
# additional salt to control the ionic strength
c_salt = concentration(2*c_acid)
# In the ideal system, concentration is arbitrary (see Henderson-Hasselbalch equation)
# but it is important in the interacting system
N_acid = 20 # number of titratable units in the box
PROB_REACTION = 0.5 # select the reaction move with 50% probability
# probability of the reaction is adjustable parameter of the method that affects the speed of convergence
# Simulate an interacting system with steric repulsion (Warning: it will be slower than without WCA!)
USE_WCA = False
# Simulate an interacting system with electrostatics (Warning: it will be very slow!)
USE_ELECTROSTATICS = False
# particle types of different species
TYPE_HA = 0
TYPE_A = 1
TYPE_B = 2
TYPE_Na = 3
TYPE_Cl = 4
q_HA = 0
q_A = -1
q_B = +1
q_Na = +1
q_Cl = -1
# acidity constant
pK = 4.88
K = 10**(-pK)
offset = 2.0 # range of pH values to be used pK +/- offset
num_pHs = 15 # number of pH values
pKw = 14.0 # autoprotolysis constant of water
# dependent parameters
Box_V = (N_acid/N_A/c_acid)
Box_L = np.cbrt(Box_V.to('m**3'))*ureg('m')
# we shall often need the numerical value of box length in sigma
Box_L_in_sigma = Box_L.to('sigma').magnitude
# unfortunately, pint module cannot handle cube root of m**3, so we need to explicitly set the unit
N_salt = int(c_salt*Box_V*N_A) # number of salt ion pairs in the box
# print the values of dependent parameters to check for possible rounding errors
print("N_salt: {0:.1f}, N_acid: {1:.1f}, N_salt/N_acid: {2:.7f}, c_salt/c_acid: {3:.7f}".format(
N_salt, N_acid, 1.0*N_salt/N_acid, c_salt/c_acid))
n_blocks = 16 # number of block to be used in data analysis
desired_block_size = 10 # desired number of samples per block
# number of reaction samples per each pH value
num_samples = int(n_blocks * desired_block_size / PROB_REACTION)
pHmin = pK-offset # lowest pH value to be used
pHmax = pK+offset # highest pH value to be used
pHs = np.linspace(pHmin, pHmax, num_pHs) # list of pH values
# Initialize the ESPResSo system
##############################################
system = espressomd.System(box_l=[Box_L_in_sigma] * 3)
system.time_step = 0.01
system.cell_system.skin = 0.4
system.thermostat.set_langevin(kT=1.0, gamma=1.0, seed=7)
np.random.seed(seed=10) # initialize the random number generator in numpy
Explanation: Introduction
This tutorial introduces the basic features for simulating titratable systems via the constant pH method.
The constant pH method is one of the methods implemented for simulating systems with chemical reactions within the Reaction Ensemble module. It is a Monte Carlo method designed to model an acid-base ionization reaction at a given (fixed) value of solution pH.
We will consider a homogeneous aqueous solution of a titratable acidic species $\mathrm{HA}$ that can dissociate in a reaction that is characterized by the equilibrium constant $\mathrm{p}K_A=-\log_{10} K_A$
$$\mathrm{HA} \Leftrightarrow \mathrm{A}^- + \mathrm{H}^+$$
If $N_0 = N_{\mathrm{HA}} + N_{\mathrm{A}^-}$ is the number of titratable groups in solution, then we define the degree of dissociation $\alpha$ as:
$$\alpha = \dfrac{N_{\mathrm{A}^-}}{N_0}.$$
This is one of the key quantities that can be used to describe the acid-base equilibrium. Usually, the goal of the simulation is to predict the value of $\alpha$ under given conditions in a complex system with interactions.
The Chemical Equilibrium and Reaction Constant
The equilibrium reaction constant describes the chemical equilibrium of a given reaction. The values of equilibrium constants for various reactions can be found in tables. For the acid-base ionization reaction, the equilibrium constant is conventionally called the acidity constant, and it is defined as
\begin{equation}
K_A = \frac{a_{\mathrm{H}^+} a_{\mathrm{A}^-} } {a_{\mathrm{HA}}}
\end{equation}
where $a_i$ is the activity of species $i$. It is related to the chemical potential $\mu_i$ and to the concentration $c_i$
\begin{equation}
\mu_i = \mu_i^\mathrm{ref} + k_{\mathrm{B}}T \ln a_i
\,,\qquad
a_i = \frac{c_i \gamma_i}{c^{\ominus}}\,,
\end{equation}
where $\gamma_i$ is the activity coefficient, and $c^{\ominus}$ is the (arbitrary) reference concentration, often chosen to be the standard concentration, $c^{\ominus} = 1\,\mathrm{mol/L}$, and $\mu_i^\mathrm{ref}$ is the reference chemical potential.
Note that $K$ is a dimensionless quantity but its numerical value depends on the choice of $c^{\ominus}$.
For an ideal system, $\gamma_i=1$ by definition, whereas for an interacting system $\gamma_i$ is a non-trivial function of the interactions. For an ideal system we can rewrite $K$ in terms of equilibrium concentrations
\begin{equation}
K_A \overset{\mathrm{ideal}}{=} \frac{c_{\mathrm{H}^+} c_{\mathrm{A}^-} } {c_{\mathrm{HA}} c^{\ominus}}
\end{equation}
The ionization degree can also be expressed via the ratio of concentrations:
\begin{equation}
\alpha
= \frac{N_{\mathrm{A}^-}}{N_0}
= \frac{N_{\mathrm{A}^-}}{N_{\mathrm{HA}} + N_{\mathrm{A}^-}}
= \frac{c_{\mathrm{A}^-}}{c_{\mathrm{HA}}+c_{\mathrm{A}^-}}
= \frac{c_{\mathrm{A}^-}}{c_{\mathrm{A}}}.
\end{equation}
where $c_{\mathrm{A}}=c_{\mathrm{HA}}+c_{\mathrm{A}^-}$ is the total concentration of titratable acid groups irrespective of their ionization state.
Then, we can characterize the acid-base ionization equilibrium using the ionization degree and pH, defined as
\begin{equation}
\mathrm{pH} = -\log_{10} a_{\mathrm{H^{+}}} \overset{\mathrm{ideal}}{=} -\log_{10} (c_{\mathrm{H^{+}}} / c^{\ominus})
\end{equation}
Substituting for the ionization degree and pH into the expression for $K_A$ we obtain the Henderson-Hasselbalch equation
\begin{equation}
\mathrm{pH}-\mathrm{p}K_A = \log_{10} \frac{\alpha}{1-\alpha}
\end{equation}
One result of the Henderson-Hasselbalch equation is that at a fixed pH value the ionization degree of an ideal acid is independent of its concentration. Another implication is that the degree of ionization does not depend on the absolute values of $\mathrm{p}K_A$ and $\mathrm{pH}$, but only on their difference, $\mathrm{pH}-\mathrm{p}K_A$.
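Solving the Henderson-Hasselbalch equation for $\alpha$ gives the ideal titration curve used for comparison with the simulation results; a one-line implementation:

```python
# pH - pK = log10(alpha / (1 - alpha))  =>  alpha = 1 / (1 + 10**(pK - pH))

def ideal_alpha(pH, pK):
    return 1.0 / (1.0 + 10.0**(pK - pH))
```

At $\mathrm{pH} = \mathrm{p}K_A$ exactly half of the acid groups are ionized, and $\alpha$ approaches 0 or 1 a couple of pH units away from $\mathrm{p}K_A$.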
Constant pH Method
The constant pH method Reed1992 is designed to simulate an acid-base ionization reaction at a given pH. It assumes that the simulated system is coupled to an implicit reservoir of $\mathrm{H^+}$ ions, but the exchange of ions with this reservoir is not explicitly simulated. Therefore, the concentration of ions in the simulation box is not equal to the concentration of $\mathrm{H^+}$ ions at the chosen pH. This may lead to artifacts when simulating interacting systems, especially at high or low pH values. Discussion of these artifacts is beyond the scope of this tutorial (see e.g. Landsgesell2019 for further details).
In ESPResSo, the forward step of the ionization reaction (from left to right) is implemented by
changing the chemical identity (particle type) of a randomly selected $\mathrm{HA}$ particle to $\mathrm{A}^-$, and inserting another particle that represents a neutralizing counterion. The neutralizing counterion is not necessarily an $\mathrm{H^+}$ ion. Therefore, we give it a generic name $\mathrm{B^+}$. In the reverse direction (from right to left), the chemical identity (particle type) of a randomly selected $\mathrm{A}^{-}$ is changed to $\mathrm{HA}$, and a randomly selected $\mathrm{B}^+$ is deleted from the simulation box. The probability of proposing the forward reaction step is $P_\text{prop}=N_\mathrm{HA}/N_0$, and the probability of proposing the reverse step is $P_\text{prop}=N_{\mathrm{A}^-}/N_0$. The trial move is accepted with the acceptance probability
$$ P_{\mathrm{acc}} = \operatorname{min}\left(1, \exp(-\beta \Delta E_\mathrm{pot} \pm \ln(10) \cdot (\mathrm{pH - p}K_A) ) \right)$$
Here $\Delta E_\text{pot}$ is the potential energy change due to the reaction, while $\text{pH - p}K$ is an input parameter.
The signs $\pm 1$ correspond to the forward and reverse direction of the ionization reaction, respectively.
Setup
The inputs that we need to define our system in the simulation include
* concentration of the titratable units c_acid
* dissociation constant pK
* Bjerrum length Bjerrum
* system size (given by the number of titratable units) N_acid
* concentration of added salt c_salt
* pH
From the concentration of titratable units and the number of titratable units we calculate the box length.
We create a system with this box size.
From the salt concentration we calculate the number of additional salt ion pairs that should be present in the system.
We set the dissociation constant of the acid to $\mathrm{p}K_A=4.88$, which is the acidity constant of propionic acid. We choose propionic acid because its structure is closest to the repeating unit of poly(acrylic acid), the most commonly used weak polyacid.
We will simulate multiple pH values, the range of which is determined by the parameters offset and num_pHs.
End of explanation
# create the particles
##################################################
# we need to define bonds before creating polymers
hb = HarmonicBond(k=30, r_0=1.0)
system.bonded_inter.add(hb)
# create the polymer composed of ionizable acid groups, initially in the ionized state
polymers = polymer.positions(n_polymers=1,
beads_per_chain=N_acid,
bond_length=0.9, seed=23)
for positions in polymers:
    for index, position in enumerate(positions):
        pid = len(system.part)
        system.part.add(id=pid, pos=position, type=TYPE_A, q=q_A)
        if index > 0:
            system.part[pid].add_bond((hb, pid - 1))
# add the corresponding number of neutralizing B+ counterions
for index in range(N_acid):
system.part.add(pos=np.random.random(3)*Box_L_in_sigma, type=TYPE_B, q=q_B)
# add salt ion pairs
for index in range(N_salt):
system.part.add(pos=np.random.random(
3)*Box_L_in_sigma, type=TYPE_Na, q=q_Na)
system.part.add(pos=np.random.random(
3)*Box_L_in_sigma, type=TYPE_Cl, q=q_Cl)
# set up the WCA interaction between all particle pairs
if USE_WCA:
types = [TYPE_HA, TYPE_A, TYPE_B, TYPE_Na, TYPE_Cl]
for type_1 in types:
for type_2 in types:
system.non_bonded_inter[type_1, type_2].lennard_jones.set_params(
epsilon=1.0, sigma=1.0,
cutoff=2**(1.0 / 6), shift="auto")
# run a steepest descent minimization to relax overlaps
system.integrator.set_steepest_descent(
f_max=0, gamma=0.1, max_displacement=0.1)
system.integrator.run(20)
system.integrator.set_vv() # to switch back to velocity Verlet
# short integration to let the system relax
system.integrator.run(steps=1000)
# if needed, set up and tune the Coulomb interaction
if USE_ELECTROSTATICS:
print("set up and tune p3m, please wait....")
p3m = electrostatics.P3M(prefactor=Bjerrum.to(
'sigma').magnitude, accuracy=1e-3)
system.actors.add(p3m)
p3m_params = p3m.get_params()
# for key in list(p3m_params.keys()):
# print("{} = {}".format(key, p3m_params[key]))
print(p3m.get_params())
print("p3m, tuning done")
else:
# this speeds up the simulation of dilute systems with small particle numbers
system.cell_system.set_n_square()
print("Done adding particles and interactions")
Explanation: After defining the simulation parameters, we set up the system that we want to simulate. It is a polyelectrolyte chain with some added salt that is used to control the ionic strength of the solution. For the first run, we set up the system without any steric repulsion and without electrostatic interactions. In the next runs, we will add the steric repulsion and electrostatic interactions to observe their effect on the ionization.
End of explanation
RE = reaction_ensemble.ConstantpHEnsemble(
temperature=1, exclusion_radius=1.0, seed=77)
Explanation: After creating the particles, we initialize the reaction ensemble by setting the temperature, the exclusion radius, and the seed of the random number generator. Setting the temperature to unity means that our reduced unit of energy is $\varepsilon=1k_{\mathrm{B}}T$. In an interacting system, the exclusion radius ensures that particle insertions too close to other particles are not attempted; such insertions would make the subsequent Langevin dynamics integration unstable. If the particles are not interacting, we can set the exclusion radius to $0.0$. Otherwise, $1.0$ is a good value. We set the seed to a constant value to ensure reproducible results.
End of explanation
RE.add_reaction(gamma=K, reactant_types=[TYPE_HA], reactant_coefficients=[1],
product_types=[TYPE_A, TYPE_B], product_coefficients=[1, 1],
default_charges={TYPE_HA: q_HA, TYPE_A: q_A, TYPE_B: q_B})
print(RE.get_status())
Explanation: The next step is to define the reaction system. The order in which species are written in the lists of reactants and products is very important for ESPResSo. When a reaction move is performed, the identity of the first species in the list of reactants is changed to the first species in the list of products, the second reactant species is changed to the second product species, and so on. If the reactant list has more species than the product list, the excess reactant species are deleted from the system. If the product list has more species than the reactant list, the excess product species are created and randomly placed inside the simulation box. This convention is especially important if some of the species belong to a chain-like molecule and cannot be placed at an arbitrary position.
In the example below, the order of reactants and products ensures that the identity of $\mathrm{HA}$ is changed to $\mathrm{A^{-}}$ and vice versa, while $\mathrm{H^{+}}$ is inserted/deleted in the reaction move. Reversing the order of products in our reaction (i.e. from product_types=[TYPE_A, TYPE_B] to product_types=[TYPE_B, TYPE_A]) would result in a reaction move where the identity of $\mathrm{HA}$ would be changed to $\mathrm{H^{+}}$, while $\mathrm{A^{-}}$ would be inserted/deleted at a random position in the box. We also assign charges to each type because the charge will play an important role later, in simulations with electrostatic interactions.
End of explanation
# the reference data from Henderson-Hasselbalch equation
def ideal_alpha(pH, pK):
return 1. / (1 + 10**(pK - pH))
# empty lists as placeholders for collecting data
numAs_at_each_pH = [] # number of A- species observed at each sample
# run a productive simulation and collect the data
print("Simulated pH values: ", pHs)
for pH in pHs:
print("Run pH {:.2f} ...".format(pH))
RE.constant_pH = pH
numAs_current = [] # temporary data storage for a given pH
RE.reaction(20*N_acid + 1) # pre-equilibrate to the new pH value
for i in range(num_samples):
if np.random.random() < PROB_REACTION:
# should be at least one reaction attempt per particle
RE.reaction(N_acid + 1)
elif USE_WCA:
system.integrator.run(steps=1000)
numAs_current.append(system.number_of_particles(type=TYPE_A))
numAs_at_each_pH.append(numAs_current)
print("measured number of A-: {0:.2f}, (ideal: {1:.2f})".format(
np.mean(numAs_current), N_acid*ideal_alpha(pH, pK)))
print("finished")
Explanation: Next, we perform simulations at different pH values. The system must be equilibrated at each pH before taking samples.
Calling RE.reaction(X) attempts in total X reactions (in both backward and forward direction).
End of explanation
# statistical analysis of the results
def block_analyze(input_data, n_blocks=16):
data = np.array(input_data)
# this number of blocks is recommended by Janke as a reasonable compromise
# between the conflicting requirements on block size and number of blocks
block_size = int(data.shape[1] / n_blocks)
print("block_size:", block_size)
# initialize the array of per-block averages
block_average = np.zeros((n_blocks, data.shape[0]))
# calculate averages per each block
for block in range(0, n_blocks):
block_average[block] = np.average(
data[:, block * block_size: (block + 1) * block_size], axis=1)
# calculate the average and average of the square
av_data = np.average(data, axis=1)
av2_data = np.average(data * data, axis=1)
# calculate the variance of the block averages
block_var = np.var(block_average, axis=0)
# calculate standard error of the mean
err_data = np.sqrt(block_var / (n_blocks - 1))
# estimate autocorrelation time using the formula given by Janke
# this assumes that the errors have been correctly estimated
tau_data = np.zeros(av_data.shape)
for val in range(0, av_data.shape[0]):
if av_data[val] == 0:
# unphysical value marks a failure to compute tau
tau_data[val] = -1.0
else:
tau_data[val] = 0.5 * block_size * n_blocks / (n_blocks - 1) * block_var[val] \
/ (av2_data[val] - av_data[val] * av_data[val])
return av_data, err_data, tau_data, block_size
# estimate the statistical error and the autocorrelation time using the formula given by Janke
av_numAs, err_numAs, tau, block_size = block_analyze(numAs_at_each_pH)
print("av = ", av_numAs)
print("err = ", err_numAs)
print("tau = ", tau)
# calculate the average ionization degree
av_alpha = av_numAs/N_acid
err_alpha = err_numAs/N_acid
# plot the simulation results compared with the ideal titration curve
plt.figure(figsize=(10, 6), dpi=80)
plt.errorbar(pHs - pK, av_alpha, err_alpha, marker='o', linestyle='none',
label=r"simulation")
pHs2 = np.linspace(pHmin, pHmax, num=50)
plt.plot(pHs2 - pK, ideal_alpha(pHs2, pK), label=r"ideal")
plt.xlabel('pH-p$K$', fontsize=16)
plt.ylabel(r'$\alpha$', fontsize=16)
plt.legend(fontsize=16)
plt.show()
Explanation: Results
Finally we plot our results and compare them to the analytical results obtained from the Henderson-Hasselbalch equation.
Statistical Uncertainty
The molecular simulation produces a sequence of snapshots of the system that
constitutes a Markov chain: a sequence of realizations of a random process in which
the next value depends on the preceding one. Therefore,
subsequent values are correlated. To estimate the statistical error of the averages
determined in the simulation, one needs to correct for these correlations.
Here, we will use a rudimentary way of correcting for correlations, termed the binning method.
We refer the reader to specialized literature for a more sophisticated discussion, for example Janke2002. The general idea is to group a long sequence of correlated values into a rather small number of blocks and compute an average per block. If the blocks are big enough, they
can be considered uncorrelated, and one can apply the formula for the standard error of the mean of uncorrelated values. If the number of blocks is too small, the blocks are long enough to be uncorrelated, but the obtained error estimate itself has a high uncertainty. If the number of blocks is too high, the blocks are too short to be uncorrelated, and the obtained error estimates are systematically lower than the correct value. Therefore, the method works well only if the sample size is much greater than the autocorrelation time, so that it can be divided into a sufficient number of mutually uncorrelated blocks.
In the example below, we use a fixed number of 16 blocks to obtain the error estimates.
End of explanation
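To see the binning idea in isolation, here is a self-contained sketch on synthetic correlated data (an AR(1) process; all parameters are illustrative and unrelated to the simulation above):

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic correlated data: an AR(1) process x_t = phi * x_{t-1} + noise_t
phi, n = 0.9, 2**14
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = noise[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise[t]

# naive standard error of the mean, ignoring correlations
naive_err = x.std(ddof=1) / np.sqrt(n)

# binning estimate: average within blocks, then treat the block means as uncorrelated
n_blocks = 16
block_size = n // n_blocks
block_means = x[:n_blocks * block_size].reshape(n_blocks, block_size).mean(axis=1)
block_err = block_means.std(ddof=1) / np.sqrt(n_blocks)

# the block estimate is noticeably larger, because subsequent samples are correlated
print(f"naive: {naive_err:.4f}, binned: {block_err:.4f}")
```

The binned error is larger than the naive one by roughly a factor of $\sqrt{1+2\tau}$, where $\tau$ is the autocorrelation time of the series.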
# check if the blocks contain enough data for reliable error estimates
print("uncorrelated samples per block:\nblock_size/tau = ",
block_size/tau)
threshold = 10. # block size should be much greater than the correlation time
if np.any(block_size / tau < threshold):
print("\nWarning: some blocks may contain less than ", threshold, "uncorrelated samples."
"\nYour error estimated may be unreliable."
"\nPlease, check them using a more sophisticated method or run a longer simulation.")
print("? block_size/tau > threshold ? :", block_size/tau > threshold)
else:
print("\nAll blocks seem to contain more than ", threshold, "uncorrelated samples.\
Error estimates should be OK.")
Explanation: The simulation results for the non-interacting case compare very well with the analytical solution of the Henderson-Hasselbalch equation. There are only minor deviations, and the estimated errors are small too. This situation will change when we introduce interactions.
It is useful to check whether the estimated errors are consistent with the assumptions that were used to obtain them. To do this, we follow [Janke2000] to estimate the number of uncorrelated samples per block, and check whether each block contains a sufficient number of uncorrelated samples (we choose 10 uncorrelated samples per block as the threshold value).
Intentionally, we make our simulation slightly too short, so that it does not produce enough uncorrelated samples. We encourage the reader to vary the number of blocks or the number of samples to see how the estimated error changes with these parameters.
End of explanation
# plot the deviations from the ideal result
plt.figure(figsize=(10, 6), dpi=80)
ylim = np.amax(abs(av_alpha-ideal_alpha(pHs, pK)))
plt.ylim((-1.5*ylim, 1.5*ylim))
plt.errorbar(pHs - pK, av_alpha-ideal_alpha(pHs, pK),
err_alpha, marker='o', linestyle='none', label=r"simulation")
plt.plot(pHs - pK, 0.0*ideal_alpha(pHs, pK), label=r"ideal")
plt.xlabel('pH-p$K$', fontsize=16)
plt.ylabel(r'$\alpha - \alpha_{ideal}$', fontsize=16)
plt.legend(fontsize=16)
plt.show()
Explanation: To look in more detail at the statistical accuracy, it is useful to plot the deviations from the analytical result. This provides another way to check the consistency of error estimates. About 68% of the results should be within one error bar from the analytical result, whereas about 95% of the results should be within two times the error bar. Indeed, if you plot the deviations by running the script below, you should observe that most of the results are within one error bar from the analytical solution, a smaller fraction of the results is slightly further than one error bar, and one or two might be about two error bars apart. Again, this situation will change when we introduce interactions because the ionization of the interacting system should deviate from the Henderson-Hasselbalch equation.
End of explanation
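The one-/two-error-bar check described above can also be done numerically. A sketch with made-up deviations and error bars (stand-ins for the measured `av_alpha - ideal_alpha(pHs, pK)` and `err_alpha`):

```python
import numpy as np

# hypothetical deviations from the analytical curve and their error bars
deviations = np.array([0.005, -0.012, 0.020, -0.003, 0.015, -0.028, 0.009])
errors     = np.array([0.010,  0.015, 0.018,  0.011, 0.016,  0.017, 0.012])

within_1 = np.mean(np.abs(deviations) <= errors)       # expect roughly 68%
within_2 = np.mean(np.abs(deviations) <= 2 * errors)   # expect roughly 95%
print(f"within 1 error bar: {within_1:.0%}, within 2 error bars: {within_2:.0%}")
```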
# average concentration of B+ is the same as the concentration of A-
av_c_Bplus = av_alpha*c_acid
err_c_Bplus = err_alpha*c_acid # error in the average concentration
full_pH_range = np.linspace(2, 12, 100)
ideal_c_Aminus = ideal_alpha(full_pH_range, pK)*c_acid
ideal_c_OH = np.power(10.0, -(pKw - full_pH_range))*ureg('mol/L')
ideal_c_H = np.power(10.0, -full_pH_range)*ureg('mol/L')
# ideal_c_M is calculated from electroneutrality
ideal_c_M = np.maximum((ideal_c_Aminus + ideal_c_OH - ideal_c_H).to(
'mol/L').magnitude, np.zeros_like(full_pH_range))*ureg('mol/L')
# plot the simulation results compared with the ideal results of the cations
plt.figure(figsize=(10, 6), dpi=80)
plt.errorbar(pHs,
av_c_Bplus.to('mol/L').magnitude,
err_c_Bplus.to('mol/L').magnitude,
marker='o', c="tab:blue", linestyle='none',
label=r"measured $c_{\mathrm{B^+}}$", zorder=2)
plt.plot(full_pH_range, ideal_c_H.to('mol/L').magnitude, c="tab:green",
label=r"ideal $c_{\mathrm{H^+}}$", zorder=0)
plt.plot(full_pH_range, ideal_c_M.to('mol/L').magnitude, c="tab:orange",
label=r"ideal $c_{\mathrm{M^+}}$", zorder=0)
plt.plot(full_pH_range, ideal_c_Aminus.to('mol/L').magnitude, c="tab:blue", ls=(0, (5, 5)),
label=r"ideal $c_{\mathrm{A^-}}$", zorder=1)
plt.yscale("log")
plt.ylim(1e-6,)
plt.xlabel('input pH', fontsize=16)
plt.ylabel(r'concentration $c$ $[\mathrm{mol/L}]$', fontsize=16)
plt.legend(fontsize=16)
plt.show()
Explanation: The Neutralizing Ion $\mathrm{B^+}$
Up to now we did not discuss the chemical nature of the neutralizer $\mathrm{B^+}$. The added salt is not relevant in this context, therefore we omit it from the discussion. The simplest case to consider is what happens if you add the acidic polymer to pure water ($\mathrm{pH} = 7$). Some of the acid groups dissociate and release $\mathrm{H^+}$ ions into the solution. The pH decreases to a value that depends on $\mathrm{p}K_{\mathrm{A}}$ and on the concentration of ionizable groups. Now, three ionic species are present in the solution: $\mathrm{H^+}$, $\mathrm{A^-}$, and $\mathrm{OH^-}$. Because the reaction generates only one $\mathrm{B^+}$ ion in the simulation box, we conclude that in this case the $\mathrm{B^+}$ ions correspond to $\mathrm{H^+}$ ions. The $\mathrm{H^+}$ ions neutralize both the $\mathrm{A^-}$ and the $\mathrm{OH^-}$ ions. At acidic pH there are only very few $\mathrm{OH^-}$ ions and nearly all $\mathrm{H^+}$ ions act as a neutralizer for the $\mathrm{A^-}$ ions. Therefore, the concentration of $\mathrm{B^+}$ is very close to the concentration of $\mathrm{H^+}$ in the real aqueous solution. Only very few $\mathrm{OH^-}$ ions, and the $\mathrm{H^+}$ ions needed to neutralize them, are missing in the simulation box, when compared to the real solution.
To achieve a more acidic pH (with the same pK and polymer concentration), we need to add an acid to the system. We can do that by adding a strong acid, such as HCl or $\mathrm{HNO}_3$. We will denote this acid by a generic name $\mathrm{HX}$ to emphasize that in general its anion can be different from the salt anion $\mathrm{Cl^{-}}$. Now, there are 4 ionic species in the solution: $\mathrm{H^+}$, $\mathrm{A^-}$, $\mathrm{OH^-}$, and $\mathrm{X^-}$ ions. By the same argument as before, we conclude that $\mathrm{B^+}$ ions correspond to $\mathrm{H^+}$ ions. The $\mathrm{H^+}$ ions neutralize the $\mathrm{A^-}$, $\mathrm{OH^-}$, and the $\mathrm{X^-}$ ions. Because the concentration of $\mathrm{X^-}$ is not negligible anymore, the concentration of $\mathrm{B^+}$ in the simulation box differs from the $\mathrm{H^+}$ concentration in the real solution. Now, many more ions are missing in the simulation box, as compared to the real solution: Few $\mathrm{OH^-}$ ions, many $\mathrm{X^-}$ ions, and all the $\mathrm{H^+}$ ions that neutralize them.
To achieve a neutral pH we need to add some base to the system to neutralize the polymer.
In the simplest case we add an alkali metal hydroxide, such as $\mathrm{NaOH}$ or $\mathrm{KOH}$, that we will generically denote as $\mathrm{MOH}$. Now, there are 4 ionic species in the solution: $\mathrm{H^+}$, $\mathrm{A^-}$, $\mathrm{OH^-}$, and $\mathrm{M^+}$. In such a situation, we cannot clearly attribute a specific chemical identity to the $\mathrm{B^+}$ ions. However, only very few $\mathrm{H^+}$ and $\mathrm{OH^-}$ ions are present in the system at $\mathrm{pH} = 7$. Therefore, we can make the approximation that at this pH, all $\mathrm{A^-}$ are neutralized by the $\mathrm{M^+}$ ions, and the $\mathrm{B^+}$ correspond to $\mathrm{M^+}$. Then, the concentration of $\mathrm{B^+}$ also corresponds to the concentration of $\mathrm{M^+}$ ions. Now, again only a few ions are missing in the simulation box, as compared to the real solution: few $\mathrm{OH^-}$ ions, and few $\mathrm{H^+}$ ions.
To achieve a basic pH we need to add even more base to the system to neutralize the polymer.
Again, there are 4 ionic species in the solution: $\mathrm{H^+}$, $\mathrm{A^-}$, $\mathrm{OH^-}$, and $\mathrm{M^+}$, and we cannot clearly attribute a specific chemical identity to the $\mathrm{B^+}$ ions. Because only very few $\mathrm{H^+}$ ions should be present in the solution, we can make the approximation that at this pH, all $\mathrm{A^-}$ ions are neutralized by the $\mathrm{M^+}$ ions, and therefore $\mathrm{B^+}$ ions in the simulation correspond to $\mathrm{M^+}$ ions in the real solution. Because the additional $\mathrm{M^+}$ ions in the real solution neutralize the $\mathrm{OH^-}$ ions, the concentration of $\mathrm{B^+}$ does not correspond to the concentration of $\mathrm{M^+}$ ions. Now, again many ions are missing in the simulation box, as compared to the real solution: few $\mathrm{H^+}$ ions, many $\mathrm{OH^-}$ ions, and a comparable amount of $\mathrm{M^+}$ ions.
To further illustrate this subject, we compare the concentration of the neutralizer ion $\mathrm{B^+}$ calculated in the simulation with the expected concentration of each ion species. At a given pH and pK we can calculate the expected degree of ionization from the Henderson-Hasselbalch equation. Then we apply the electroneutrality condition
$$c_\mathrm{A^-} + c_\mathrm{OH^-} + c_\mathrm{X^-} = c_\mathrm{H^+} + c_\mathrm{M^+}$$
where we use either $c_\mathrm{X^-}=0$ or $c_\mathrm{M^+}=0$ because we always only add extra acid or base, but never both. Adding both would be equivalent to adding extra salt $\mathrm{MX}$.
We obtain the concentrations of $\mathrm{OH^-}$ and $\mathrm{H^+}$ from the input pH value, and substitute them to the electroneutrality equation to obtain
$$\alpha c_\mathrm{acid} + 10^{-(\mathrm{p}K_\mathrm{w} - \mathrm{pH})} - 10^{-\mathrm{pH}} = c_\mathrm{M^+} - c_\mathrm{X^-}$$
Depending on whether the left-hand side of this equation is positive or negative we know whether we should add $\mathrm{M^+}$ or $\mathrm{X^-}$ ions.
End of explanation
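A scalar sketch of the sign argument above (plain floats instead of the pint quantities used elsewhere; pK is from the text, pKw and the acid concentration are illustrative). A positive value means base $\mathrm{M^+}$ must be added, a negative value means acid $\mathrm{X^-}$ must be added:

```python
def ideal_alpha(pH, pK):
    return 1.0 / (1 + 10.0**(pK - pH))

pK, pKw, c_acid = 4.88, 14.0, 1e-3  # pKw and c_acid (mol/L) are assumed values

def neutralizer_concentration(pH):
    # left-hand side of the electroneutrality balance above:
    # alpha * c_acid + c_OH- - c_H+  (= c_M+ - c_X-)
    return ideal_alpha(pH, pK) * c_acid + 10.0**(-(pKw - pH)) - 10.0**(-pH)

print(neutralizer_concentration(9.0))  # positive -> base M+ is needed
print(neutralizer_concentration(2.0))  # negative -> acid X- is needed
```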
ideal_c_X = np.maximum(-(ideal_c_Aminus + ideal_c_OH - ideal_c_H).to(
'mol/L').magnitude, np.zeros_like(full_pH_range))*ureg('mol/L')
ideal_ionic_strength = 0.5 * \
(ideal_c_X + ideal_c_M + ideal_c_H + ideal_c_OH + 2*c_salt)
# in constant-pH simulation ideal_c_Aminus = ideal_c_Bplus
cpH_ionic_strength = 0.5*(ideal_c_Aminus + 2*c_salt)
cpH_ionic_strength_measured = 0.5*(av_c_Bplus + 2*c_salt)
cpH_error_ionic_strength_measured = 0.5*err_c_Bplus
plt.figure(figsize=(10, 6), dpi=80)
plt.errorbar(pHs,
cpH_ionic_strength_measured.to('mol/L').magnitude,
cpH_error_ionic_strength_measured.to('mol/L').magnitude,
c="tab:blue",
linestyle='none', marker='o',
label=r"measured", zorder=3)
plt.plot(full_pH_range,
cpH_ionic_strength.to('mol/L').magnitude,
c="tab:blue",
ls=(0, (5, 5)),
label=r"cpH", zorder=2)
plt.plot(full_pH_range,
ideal_ionic_strength.to('mol/L').magnitude,
c="tab:orange",
linestyle='-',
label=r"ideal", zorder=1)
plt.yscale("log")
plt.xlabel('input pH', fontsize=16)
plt.ylabel(r'Ionic Strength [$\mathrm{mol/L}$]', fontsize=16)
plt.legend(fontsize=16)
plt.show()
Explanation: The plot shows that at intermediate pH the concentration of $\mathrm{B^+}$ ions is approximately equal to the concentration of $\mathrm{M^+}$ ions. Only at one specific $\mathrm{pH}$ is the concentration of $\mathrm{B^+}$ ions equal to the concentration of $\mathrm{H^+}$ ions. This is the pH one obtains when dissolving the weak acid $\mathrm{A}$ in pure water.
In an ideal system, the ions missing in the simulation have no effect on the ionization degree. In an interacting system, the presence of ions in the box affects the properties of other parts of the system. Therefore, in an interacting system this discrepancy is harmless only at intermediate pH. The effect of the small ions on the rest of the system can be estimated from the overall ionic strength.
$$ I = \frac{1}{2}\sum_i c_i z_i^2 $$
End of explanation |
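The ionic strength formula above can be wrapped in a small helper (a sketch with plain NumPy arrays rather than the pint quantities used in the code above):

```python
import numpy as np

def ionic_strength(concentrations, charges):
    """I = 1/2 * sum_i c_i * z_i**2"""
    c = np.asarray(concentrations, dtype=float)
    z = np.asarray(charges, dtype=float)
    return 0.5 * np.sum(c * z**2)

# a 1:1 salt at 0.01 mol/L: the ionic strength equals the salt concentration
print(ionic_strength([0.01, 0.01], [+1, -1]))
```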
Description:
The inimitable schema library
Part 2
By @stavros
Structured data is everywhere
Step1: How do we validate it?
Step2: <div style="text-align
Step4: Tricks | Python Code:
data = {
"operation": "upload", # "upload" or "delete"
"timeout": 3600, # Optional, how long the sig should be valid for.
"md5": "deadbeefetc", # Optional
"files": {
"5gbCtxlvljhx5-al": {
"size": 65536,
"shred_date": "2015-05-02T00:00:00Z" # Must be a date from now up to 4 months in the future.
},
}
}
Explanation: The inimitable schema library
Part 2
By @stavros
Structured data is everywhere
End of explanation
if data.get("operation") not in ["upload", "delete"]:
raise SomeError("Operation not valid.")
try:
timeout = int(data.get("timeout"))
except (TypeError, ValueError):  # int(None) raises TypeError when the key is missing
raise SomeError("Timeout not a number.")
if not 0 < timeout <= 3600:
raise SomeError("Timeout not up to one hour in the future.")
if data.get("md5") and not isinstance(data["md5"], str):
raise SomeError("md5 is not a valid MD5 hash.")
if not isinstance(data.get("files"), dict):
raise SomeError("files must be a dictionary.")
# etc
Explanation: How do we validate it?
End of explanation
from schema import Schema, And, Or, Optional, Use, SchemaError
schema = Schema({
"foo": int,
Optional("hello"): "hi!",
})
data = {
"hello": "hi!",
"foo": 3,
}
schema.validate(data)
schema = Schema({
"foo": int,
Optional("hello"): "hi!",
})
data = {
"foo": 3,
}
schema.validate(data)
schema = Schema({
"foo": int,
Optional("hello"): "hi!",
})
data = {
"hello": "yo",
"foo": 3,
}
try:
schema.validate(data)
except SchemaError as e:
print e
Explanation: <div style="text-align: center"><img src="http://i.giphy.com/NsyUZQ6OJDVdu.gif" width="960" /></div>
Is there a better way?
No.
Just kidding, of course there is. Who asks "is there a better way?" if there's no better way? No one, that's who.
Presenting the schema library.
End of explanation
schema = Schema(And(int, fetch_user_by_id))
data = "/tmp/pythess"
try:
print schema.validate(data)
except SchemaError as e:
print(e)
schema = Schema(range(10))
data = [2, 4, 6, 2, 2, 2, 20]
try:
print schema.validate(data)
except SchemaError as e:
print(e)
schema = Schema({
"shred_date": And(
basestring,
Use(ciso8601.parse_datetime_unaware),
datetime.datetime,
Use(lambda d: (d - datetime.datetime.now()).days),
lambda d: 0 < d < 120,
error="shred_date must be a valid future date string up to 120 days from now.")
})
data = {
"shred_date": "2016-10-10T00:00:00Z",
}
try:
print schema.validate(data)
except SchemaError as e:
print(e)
operations = {"upload": "PUT", "delete": "DELETE", "replace": "POST"}
schema = Schema(And(Use(json.loads, error="Invalid JSON"), {
"operation": And(lambda s: s in operations.keys(), Use(operations.get), error="Valid operations are: %s" % ", ".join(operations.keys())),
"files": {And(basestring, lambda s: len(s) > 5, error="Filename must be a string longer than 5 characters."): {
Optional("size"): And(int, lambda i: i > 0, error="Size must be a positive integer."),
Optional("shred_date"): And(
basestring, # Make sure it's a string.
Use(ciso8601.parse_datetime_unaware), # Parse it into a date.
datetime.datetime, # Make sure it's a date now.
lambda d: 0 < (d - datetime.datetime.now()).days < 120, # Make sure it's in the future, up to 120 days.
error="shred_date must be a valid future date string up to 120 days from now.")
}}}))
data = {
"operation": "repklace",
"files": {
"file.nam": {
"size": 100,
"shred_date": "2016-01-01T00:00:00Z"
}}}
try:
print schema.validate(data)
except SchemaError as e:
print e
Explanation: Tricks
End of explanation |
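The And/Use chaining shown above boils down to function composition: each validator either passes the (possibly converted) value along or raises. A toy reimplementation of the idea — not the real schema API, just a sketch of the concept:

```python
class SchemaError(Exception):
    pass

class Use:
    """Marks a callable as a converter rather than a predicate."""
    def __init__(self, func):
        self.func = func

def validate(value, *validators):
    for v in validators:
        if isinstance(v, type):          # a bare type means an isinstance check
            if not isinstance(value, v):
                raise SchemaError("%r is not %s" % (value, v.__name__))
        elif isinstance(v, Use):         # convert and pass the result along
            try:
                value = v.func(value)
            except Exception as e:
                raise SchemaError(str(e))
        elif callable(v):                # anything else callable is a predicate
            if not v(value):
                raise SchemaError("%r failed validation" % (value,))
    return value

print(validate("3600", Use(int), lambda i: 0 < i <= 3600))  # converted to the int 3600
```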
Description:
Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project 1
Step1: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation
Step3: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset
Step4: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable
Step5: Answer
Step6: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint
Step7: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint
Step9: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint
Step10: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
Step11: Answer
Step12: Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.cross_validation import ShuffleSplit
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project 1: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.
The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:
- 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed.
- 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed.
- The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded.
- The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# TODO: Minimum price of the data
minimum_price = np.min(prices)
# TODO: Maximum price of the data
maximum_price = np.max(prices)
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
Explanation: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation: Calculate Statistics
For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.
In the code cell below, you will need to implement the following:
- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.
- Store each calculation in their respective variable.
End of explanation
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
    """Calculates and returns the performance score between
    true and predicted values based on the metric chosen."""
    # TODO: Calculate the performance score between 'y_true' and 'y_predict'
    score = r2_score(y_true, y_predict)
    # Return the score
    return score
Explanation: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):
- 'RM' is the average number of rooms among homes in the neighborhood.
- 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.
Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each.
Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7?
Answer:
An increase in 'RM' should increase 'MEDV': more rooms generally means a larger home, and larger homes command higher prices.
An increase in 'LSTAT' should decrease 'MEDV': a higher percentage of lower-class homeowners can indicate poorer maintenance and a less desirable neighborhood, which devalues the homes in it. Conversely, a lower 'LSTAT' suggests a more exclusive neighborhood, which tends to drive home prices up.
An increase in 'PTRATIO' should decrease 'MEDV': more students per teacher suggests less individual attention in nearby schools, making the area less attractive to parents looking for homes. A lower ratio implies better-staffed schools, which increases demand in the zone and therefore prices.
Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 always fails to predict the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is no better than one that naively predicts the mean of the target variable.
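To make the metric concrete, R<sup>2</sup> can also be computed directly from its definition, 1 - SS_res / SS_tot. A small sketch, using the same five true/predicted values as in Question 2 below:

```python
import numpy as np

def r_squared(y_true, y_predict):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_predict = np.asarray(y_predict, dtype=float)
    ss_res = np.sum((y_true - y_predict) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

score = r_squared([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
```

This hand-rolled version agrees with sklearn's r2_score on these values.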
For the performance_metric function in the code cell below, you will need to implement the following:
- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.
- Assign the performance score to the score variable.
End of explanation
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Would you consider this model to have successfully captured the variation of the target variable? Why or why not?
Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.20, random_state=123)
# Success
print "Training and testing split was successful."
Explanation: Answer: Model has a coefficient of determination, R^2, of 0.923. In this case the model captures the variation well: the score is close to 1, meaning the model explains 92.3% of the variance in the target variable.
Implementation: Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
For the code cell below, you will need to implement the following:
- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.
- Split the data into 80% training and 20% testing.
- Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to X_train, X_test, y_train, and y_test.
End of explanation
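Under the hood, a shuffled split is just a permutation of the row indices. A minimal sketch on hypothetical arrays standing in for features and prices:

```python
import numpy as np

rng = np.random.RandomState(123)  # fixed seed keeps the split reproducible

# Hypothetical data standing in for the notebook's (features, prices)
X = np.arange(100).reshape(50, 2)
y = np.arange(50)

idx = rng.permutation(len(X))       # shuffle the row indices
n_test = int(round(0.20 * len(X)))  # 20% of the rows held out for testing
test_idx, train_idx = idx[:n_test], idx[n_test:]

X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
```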
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Explanation: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint: What could go wrong with not having a way to test your model?
Answer: Splitting the data into training and testing subsets (and shuffling it first) lets us validate the quality of the model on data it has never seen, removing the bias that would come from evaluating on the same sample used for training. Quoting from the course: "By separating training and testing sets and graphing performance on each separately, we can get a better idea of how well the model can generalize to unseen data."
Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the following question.
End of explanation
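The idea behind a learning curve can be sketched without any plotting library: fit a model on growing subsets of the training data and score each fit on both the training subset and a fixed test set. This toy version uses synthetic noisy linear data and np.polyfit as the "model" (everything here is made up for illustration only):

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(0, 10, 200)
y = 3.0 * x + rng.normal(scale=2.0, size=x.size)  # noisy linear data

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

idx = rng.permutation(x.size)
train_idx, test_idx = idx[:150], idx[150:]  # fixed held-out test set

sizes = [10, 25, 50, 100, 150]
train_scores, test_scores = [], []
for n in sizes:
    sub = train_idx[:n]                        # first n shuffled training points
    coefs = np.polyfit(x[sub], y[sub], deg=1)  # the "model": a fitted line
    train_scores.append(r2(y[sub], np.polyval(coefs, x[sub])))
    test_scores.append(r2(y[test_idx], np.polyval(coefs, x[test_idx])))
```

Plotting train_scores and test_scores against sizes gives exactly the kind of curves the visualization produces.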
vs.ModelComplexity(X_train, y_train)
Explanation: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint: Are the learning curves converging to particular scores?
Answer: max_depth = 3 gives the best model of the four, because its testing curve reaches the highest R<sup>2</sup> score.
The training score decreases as training points are added and appears to approach an asymptote around 0.8. The testing score rises and also tends to stabilize around 0.8, so the two curves (training and testing) converge to a value close to 0.80.
As the chart shows, the curves have essentially converged by around 300 training points and stay steady beyond that (350 and 400). In conclusion, adding training points helps up to roughly 300; after that, the model does not improve significantly.
Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code cell below and use this graph to answer the following two questions.
End of explanation
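A complexity curve works the same way as a learning curve, except the x-axis is model complexity rather than training-set size. A toy sketch using polynomial degree in place of max_depth (synthetic data, for illustration only):

```python
import numpy as np

rng = np.random.RandomState(1)
x = np.linspace(-3, 3, 60)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)  # noisy sine wave

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

idx = rng.permutation(x.size)
tr, va = idx[:40], idx[40:]  # training vs. validation split

degrees = list(range(1, 10))  # polynomial degree plays the role of max_depth
train_scores, valid_scores = [], []
for d in degrees:
    coefs = np.polyfit(x[tr], y[tr], deg=d)
    train_scores.append(r2(y[tr], np.polyval(coefs, x[tr])))
    valid_scores.append(r2(y[va], np.polyval(coefs, x[va])))
```

The training score only improves as complexity grows, while the validation score peaks at some intermediate degree — the same bias-variance shape the next question asks about.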
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.cross_validation import ShuffleSplit

def fit_model(X, y):
    """Performs grid search over the 'max_depth' parameter for a
    decision tree regressor trained on the input data [X, y]."""
    # Create cross-validation sets from the training data
    cv_sets = ShuffleSplit(X.shape[0], n_iter=10, test_size=0.20, random_state=0)
    # TODO: Create a decision tree regressor object
    regressor = DecisionTreeRegressor()
    # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
    params = {'max_depth': (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)}
    # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
    scoring_fnc = make_scorer(performance_metric)
    # TODO: Create the grid search object
    grid = GridSearchCV(regressor, param_grid=params, cv=cv_sets, scoring=scoring_fnc)
    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)
    # Return the optimal model after fitting the data
    return grid.best_estimator_
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint: How do you know when a model is suffering from high bias or high variance?
Answer: When the model is trained with a maximum depth of 1, it suffers from high bias: both curves sit close together at a low score, showing the model is underfitted. With a maximum depth of 10, it suffers from high variance: the training and validation curves separate from one another more and more as depth is added, with the training score climbing while the validation score does not.
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer?
Answer: I think a maximum depth of 4 or 5 generalizes best to unseen data, because around that point the validation score peaks while the gap between the training and validation curves (the variance) begins to grow. More depth would add variance; less depth would add bias. Applying Occam's Razor, which says "the simplest model that fits the data is also the most plausible" (Abu-Mostafa et al., 2012), the best depth is 4, because it implies the simpler model.
Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.
Question 7 - Grid Search
What is the grid search technique and how it can be applied to optimize a learning algorithm?
Answer: Grid search is a technique for systematically exploring many combinations of hyperparameter values: each combination is fitted and evaluated, typically with cross-validation, and the combination with the best score is selected. As commonly described, it "is a way of systematically working through multiple combinations of parameter tunes, cross-validating as it goes to determine which tune gives the best performance".
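Stripped of the scikit-learn machinery, grid search is just an exhaustive loop over parameter combinations. A minimal sketch — note the evaluate function below is a made-up stand-in for "fit the model with these parameters and return its cross-validated score":

```python
import itertools

# A tiny parameter grid, like the {'max_depth': ...} dict passed to GridSearchCV
param_grid = {"max_depth": [1, 2, 3], "min_samples_split": [2, 10]}

# Enumerate every combination of parameter values
combos = [dict(zip(param_grid, values))
          for values in itertools.product(*param_grid.values())]

def evaluate(params):
    # Hypothetical scoring function, just to make the loop runnable;
    # in practice this would be a cross-validated model score.
    return -abs(params["max_depth"] - 2) - 0.01 * params["min_samples_split"]

best_params = max(combos, key=evaluate)
```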
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?
Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?
Answer: K-fold cross-validation is a technique for problems that require splitting data into training and validation sets. It splits the sample into k folds of roughly equal size and runs k separate learning experiments, each time training on k-1 folds and validating on the remaining one; the k results are then averaged. The result is more precise than a single train/test partition, and it has the benefit of using all of the sample data. For grid search, this means each parameter setting is scored across every fold rather than on one arbitrary split, so the chosen parameters are less likely to be an artifact of how the data happened to be divided.
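The fold bookkeeping can be sketched in a few lines of numpy — a simplified version of what sklearn's KFold does (no shuffling here, for clarity):

```python
import numpy as np

def kfold_indices(n_samples, k):
    # Yield (train_idx, val_idx) pairs: each fold is the validation set exactly once
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train_idx, val_idx

splits = list(kfold_indices(10, 5))
```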
Implementation: Fitting a Model
Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.
For the fit_model function in the code cell below, you will need to implement the following:
- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.
- Assign this object to the 'regressor' variable.
- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.
- Use make_scorer from sklearn.metrics to create a scoring function object.
- Pass the performance_metric function as a parameter to the object.
- Assign this scoring function to the 'scoring_fnc' variable.
- Use GridSearchCV from sklearn.grid_search to create a grid search object.
- Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object.
- Assign the GridSearchCV object to the 'grid' variable.
End of explanation
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
Explanation: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
End of explanation
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
Explanation: Answer: The maximum depth is 4, which falls within the range I chose in Question 6. The inflection point of the complexity curves turns out to be a good place to start when choosing the right model.
Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
End of explanation
vs.PredictTrials(features, prices, fit_model, client_data)
Explanation: Answer: First of all, I rounded the prices, because in reality it is difficult to justify an exact figure. For client 1 I recommend a selling price of $410,000.00; for client 2, $233,000.00; and for client 3, $893,000.00. The prices look reasonable given the features: all of them fall between the minimum ($105,000.00) and maximum ($1,024,800.00) prices for the city, and they were calculated by a model that accounts for the number of rooms, the neighborhood poverty level, and the student-teacher ratio of nearby schools.
More rooms suggest the house is worth more, and on top of that, client 3's neighborhood has a low poverty level compared to the others (3%), making its exclusivity evident; the model accordingly predicts a higher price. This is reinforced by the neighborhood's better student-teacher ratio, which suggests more personalized education, increasing the attractiveness of the housing and therefore its price.
A similar analysis applies to the other two clients. Client 1's house is very close to the city average, so one would think substantial improvements to the house could increase its value. Client 2's house is in a neighborhood with worse standards than the other two, and the model reflects this in the forecasted price.
In conclusion, the poverty and education variables add to or detract from what the physical characteristics of the house, such as the number of rooms, would suggest on their own; adding such variables can improve a model's forecasts, as we are evaluating here.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
End of explanation |
8,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
As seen above we have a clear outlier that lies way outside SF, probably a typo. Hence we filter this data point out.
Step1: The points now all seem to be within SF borders
Step3: I will now look at the total squared error in relation to the number of clusters, to find the ideal knee bend,
Step4: As seen, the error drops dramatically as we move from 1 to 2 clusters. It also drops rather significantly from 2 to 3, though not nearly as much as before. The optimal solution would hence be either 2 or 3 clusters.
CSV exporter for D3 data | Python Code:
X = X[X['lon'] < -122]
X.plot(kind='scatter', x='lon', y='lat')
Explanation: As seen above we have a clear outlier that lies way outside SF, probably a typo. Hence we filter this data point out.
End of explanation
from sklearn.cluster import KMeans
#To work with out cluster we have to turn our panda dataframe into a numpy array,
np_X = np.array(X)
kmeans = KMeans(n_clusters=2)
kmeans.fit(np_X)
centroid = kmeans.cluster_centers_
labels = kmeans.labels_
print "The %s cluster centers are located at %s " %(len(centroid),centroid)
colors = ["g.","r.","c."]
for i in range(len(np_X)):
    plt.plot(np_X[i][0], np_X[i][1], colors[labels[i]], markersize=10)
plt.scatter(centroid[:,0],centroid[:,1], marker = "x", s=150, linewidths = 5, zorder =10)
plt.show()
Explanation: The points now all seem to be within SF borders
End of explanation
from sklearn.cluster import KMeans
#To work with out cluster we have to turn our panda dataframe into a numpy array,
np_X = X
kmeans = KMeans(n_clusters=2)
kmeans.fit(np_X)
centroid = kmeans.cluster_centers_
classified_data = kmeans.labels_
labels = kmeans.labels_
print "The %s cluster centers are located at %s " %(len(centroid),centroid)
classified_data
#copy dataframe (may be memory intensive but just for illustration)
df_processed = X.copy()
df_processed['Cluster Class'] = pd.Series(classified_data, index=df_processed.index)
df_processed.head()
centroid_df = DataFrame(centroid)
centroid_df.head()
df_processed.plot(kind='scatter', x='lon', y='lat',
c = 'Cluster Class', label='datapoints');
import numpy
import pandas
from matplotlib import pyplot
import seaborn
seaborn.set(style='ticks')
numpy.random.seed(0)
N = 37
_genders= ['Female', 'Male', 'Non-binary', 'No Response']
df = pandas.DataFrame({
'Height (cm)': numpy.random.uniform(low=130, high=200, size=N),
'Weight (kg)': numpy.random.uniform(low=30, high=100, size=N),
'Gender': numpy.random.choice(_genders, size=N)
})
fg = seaborn.FacetGrid(data=df, hue='Gender', hue_order=_genders, aspect=1.61)
fg.map(pyplot.scatter, 'Weight (kg)', 'Height (cm)').add_legend()
########################################
import seaborn
seaborn.set(style='ticks')
fg = seaborn.FacetGrid(data=df_processed, hue='Cluster Class', aspect=1.61)
fg.map(pyplot.scatter, 'lon', 'lat').add_legend()
from scipy.spatial import distance
def dist_euc(lon, lat, centroid):
    data_cord = [lon, lat]
    return distance.euclidean(data_cord, centroid)
df_processed['distance'] = df_processed.apply(lambda row: dist_euc(row['lon'], row['lat'],centroid[row['Cluster Class']]), axis=1)
df_processed.head()
ksum = []
def get_ksum(k):
    lonList = X['lon'].tolist()
    latList = X['lat'].tolist()
    for i in range(1, k):
        kmeans = KMeans(n_clusters=i)
        kmeans.fit(X)
        centroid = kmeans.cluster_centers_
        labels = kmeans.labels_
        tmp_sum = 0
        for index, row in enumerate(lonList):
            tmp_sum += dist_euc(lonList[index], latList[index], centroid[labels[index]])
        ksum.append(tmp_sum)
print ksum
#I Transform my data into a Dataframe to do easy and pretty plotting :-)
ksum_df = DataFrame(ksum, index = range(1,10))
ksum_df.plot()
Explanation: I will now look at the total squared error in relation to the number of clusters, to find the ideal knee bend.
End of explanation
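As an aside, the within-cluster sum of squares computed by the loop above is the quantity scikit-learn's KMeans exposes as inertia_. A self-contained, pure-numpy sketch of the same elbow idea on synthetic blobs — a minimal Lloyd's algorithm for illustration, not production code:

```python
import numpy as np

def kmeans_inertia(X, k, n_iter=20, seed=0):
    # Minimal Lloyd's algorithm; returns the within-cluster sum of squares
    rng = np.random.RandomState(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)          # assign each point to nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)  # move center to centroid
    return float(((X - centers[labels]) ** 2).sum())

rng = np.random.RandomState(42)
blob_a = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2))
blob_b = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(50, 2))
pts = np.vstack([blob_a, blob_b])

inertias = [kmeans_inertia(pts, k) for k in (1, 2, 3)]
```

With two well-separated blobs, the inertia collapses going from k=1 to k=2 and barely improves after, which is exactly the knee we look for.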
import csv
csv_file = df_processed[['lon','lat','Cluster Class']].values
csv_file
with open('datapoints.csv','wb') as f:
    w = csv.writer(f)
    w.writerows(csv_file)
df_csv = X.copy(deep=True)
df_csv.head()
centroid_list = []
for i in range(1, 7):
    kmeans = KMeans(n_clusters=i)
    kmeans.fit(X)
    centroid = kmeans.cluster_centers_
    labels = kmeans.labels_
    column = "k%s" % i
    df_csv[column] = labels
    centroid_not_np = centroid.tolist()
    centroid_list.append(centroid_not_np)
df_csv.head()
centroid_list
df_csv.to_csv('csv_clusters.csv', index=False)
with open('centroids.csv','wb') as csvfile:
    w = csv.writer(csvfile, quoting=csv.QUOTE_MINIMAL)
    w.writerows(centroid_list)
Explanation: As seen, the error drops dramatically as we move from 1 to 2 clusters. It also drops rather significantly from 2 to 3, though not nearly as much as before. The optimal solution would hence be either 2 or 3 clusters.
CSV exporter for D3 data
End of explanation |
8,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dice, Polls & Dirichlet Multinomials
As part of a longer term project to learn Bayesian Statistics, I'm currently reading Bayesian Data Analysis, 3rd Edition by Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin, commonly known as BDA3.
Although I've been using Bayesian statistics and probabilistic programming languages, like PyMC3, in projects for the last year or so, this book forces me to go beyond a pure practitioner's approach to modeling, while still delivering very practical value.
Below are a few take aways from the earlier chapters in the book I found interesting. They are meant to hopefully inspire others to learn about Bayesian statistics, without trying to be overly formal about the math. If something doesn't look 100% to the trained mathematicians in the room, please let me know, or just squint a little harder. ;)
We'll cover
Step1: Just looking at a simple bar plot, we suspect that we might not be dealing with a fair die!
However, students of Bayesian statistics that we are, we'd like to go further and quantify our uncertainty in the fairness of the die and calculate the probability that someone slipped us loaded dice.
Step2: Let's set up a simple model in PyMC3 that not only calculates the posterior probability for $theta$ (i.e. the probability for each side of the die), but also estimates the bias for throwing a $6$.
We will use a Deterministic variable, in addition to our unobserved (theta) and observed (results) variables.
For the prior on $theta$, we'll use a non-informative uniform distribution, by initializing the $Dirichlet$ prior with a series of 1s for the parameter a, one for each of the k possible outcomes. This is similar to initializing a $Beta$ distribution as $Beta(1, 1)$, which corresponds to the Uniform distribution.
Step3: Starting with version 3.5, PyMC3 includes a handy function to plot models in plate notation
Step4: Let's draw 1,000 samples from the joint posterior using the default NUTS sampler
Step5: From the traceplot, we can already see that one of the $theta$ posteriors isn't in line with the rest
Step6: We'll plot the posterior distributions for each $theta$ and compare it our reference value $p$ to see if the 95% HPD (Highest Posterior Density) interval includes $p = 1/6$.
Step7: We can clearly see that the HPD for the posterior probability for rolling a $6$ barely includes what we'd expect from a fair die.
To be more precise, let's plot the probability of our die being biased on $6$, by comparing $theta[Six]$ to $p$
Step8: Lastly, we can calculate the probability that the die is biased on $6$ by calculating the density to the right of our reference line at $0$
Step9: Better get some new dice...!
Polling #1
Let's turn our review of the Dirichlet-Multinomial distribution to another example, concerning polling data.
In section 3.4 of BDA3 on multivariate models and, specifically the section on Multinomial Models for Categorical Data, the authors include a, little dated, example of polling data in the 1988 Presidential race between George H.W. Bush and Michael Dukakis.
Here's the setup
Step10: We, again, set up a simple Dirichlet-Multinomial model and include a Deterministic variable that calculates the metric of interest - the difference in probability of respondents for Bush vs. Dukakis.
Step11: Looking at the % difference between respondents for Bush vs Dukakis, we can see that most of the density is greater than 0%, signifying a strong advantage for Bush in this poll.
We've also fit a $Beta$ distribution to this data via scipy.stats, and we can see that posterior of the difference of the 2 $theta$ values is a pretty good match.
Step12: Percentage of samples with bush_dukakis_diff > 0
Step13: Polling #2
As an extension to the previous model, the authors of BDA include an exercise in chapter 3.10 (Exercise 2) that presents us with polling data from the 1988 Presidential race, taking before and after the one of the debates.
Comparison of two multinomial observations
Step14: Convert to 2x3 array
Step15: Number of respondents in each survey
Step16: Number of respondents for the 2 major candidates in each survey
Step17: For this model, we'll need to set up the priors slightly differently. Instead of 1 set of thetas, we need 2, one for each survey (pre/post debate).
To do that without creating specific pre/post versions of each variable, we'll take advantage of PyMC3's shape parameter, available for most (all?) distributions.
In this case, we'll need a 2-dimensional shape parameter, representing the number of debates n_debates and the number of choices in candidates n_candidates
Step18: Thus, we need to initialize a Dirichlet distribution prior with shape (2,3) and then refer to the relevant parameters by index where needed.
Step19: For models with multi-dimensional shapes, it's always good to check the shapes of the various parameters before sampling
Step20: The plate notation visual can also help with that
Step21: Let's sample with a slightly higher number of draws and tuning steps
Step22: We'll take a look at the means of the posteriors for theta, indicating the % of support for each candidate pre & post debate
Step23: Just from the means, we can see that the number of Bush supporters has likely decreased post debate from 48.8% to 46.3% (as a % of supporters of the 2 major candidates)
Step24: Let's compare the results visually, by plotting the posterior distributions of the pre/post debate values for % responses for Bush and the posterior for pre/post difference in Bush supporters
Step25: From the second plot, we can already see that a large portion of the posterior density is below 0, but let's be precise and actually calculate the probability that support shifted towards Bush after the debate | Python Code:
y = np.asarray([20, 21, 17, 19, 17, 28])
k = len(y)
p = 1/k
n = y.sum()
n, p
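Before reaching for MCMC, it's worth noting that this model is conjugate: with a Dirichlet prior, the posterior over theta is available in closed form as Dirichlet(a + y). That gives a quick sanity check on what the sampler should recover:

```python
import numpy as np

y = np.array([20, 21, 17, 19, 17, 28])  # observed counts for each face
a_prior = np.ones(6)                    # uniform Dirichlet(1, ..., 1) prior

# Dirichlet-Multinomial conjugacy: the posterior is Dirichlet(a_prior + y)
a_post = a_prior + y
posterior_mean = a_post / a_post.sum()  # E[theta_k] = a_k / sum(a)
```

The posterior mean for the six (29/128 ≈ 0.227) already sits well above the fair-die value of 1/6.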
Explanation: Dice, Polls & Dirichlet Multinomials
As part of a longer term project to learn Bayesian Statistics, I'm currently reading Bayesian Data Analysis, 3rd Edition by Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin, commonly known as BDA3.
Although I've been using Bayesian statistics and probabilistic programming languages, like PyMC3, in projects for the last year or so, this book forces me to go beyond a pure practitioner's approach to modeling, while still delivering very practical value.
Below are a few take aways from the earlier chapters in the book I found interesting. They are meant to hopefully inspire others to learn about Bayesian statistics, without trying to be overly formal about the math. If something doesn't look 100% to the trained mathematicians in the room, please let me know, or just squint a little harder. ;)
We'll cover:
- Some common conjugate distributions
- An example of the Dirichlet-Multinomial distribution using dice rolls
- Two examples involving polling data from BDA3
Conjugate Distributions
In Chapter 2 of the book, the authors introduce several choices for prior probability distributions, along with the concept of conjugate distributions in section 2.4.
From Wikipedia
In Bayesian probability theory, if the posterior distributions p(θ | x) are in the same probability distribution family as the prior probability distribution p(θ), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function
John Cook has this helpful diagram on his website that shows some common families of conjugate distributions:
<img src=https://www.johndcook.com/conjugate_prior_diagram.png width="300">
Conjugate distributions are a very important concept in probability theory, owing to a large degree to some nice mathematical properties that make computing the posteriors more tractable. Even with increasingly better computational tools, such as MCMC, models based on conjugate distributions are advantageous.
Beta-Binomial
One of the better-known examples of conjugate distributions is the Beta-Binomial distribution, which is often used to model series of coin flips (the ever-present topic in posts about probability). While the $Binomial$ distribution represents the probability of success in a series of Bernoulli trials, the Beta distribution here represents the prior probability distribution of the probability of success for each trial.
Thus, the probability $p$ of a coin landing on head is modeled to be $Beta$ distributed (with parameters $\alpha$ and $\beta$), while the likelihood of heads and tails is assumed to follow a $Binomial$ distribution with parameters $n$ (representing the number of flips) and the $Beta$ distributed $p$, thus creating the link.
$$p \sim Beta(\alpha, \beta)$$
$$y \sim Binomial(n, p)$$
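This hierarchy can be sketched numerically with NumPy rather than PyMC3; the parameter values below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Prior on the probability of heads: Beta(alpha, beta)
alpha, beta = 2.0, 2.0
p = rng.beta(alpha, beta, size=10_000)

# For each sampled p, flip the coin n times
n = 20
y = rng.binomial(n, p)

# Marginally, y follows the Beta-Binomial distribution;
# its mean is n * alpha / (alpha + beta), i.e. 10 for these parameters
print(y.mean())
```

Each draw of y mixes over the uncertainty in p, which is exactly what the conjugate pair expresses.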
Gamma-Poisson
Another often-used conjugate distribution is the Gamma-Poisson distribution, so named because the rate parameter $\lambda$ that parameterizes the Poisson distribution is modeled as a Gamma distribution:
$$\lambda \sim Gamma(k, \theta)$$
$$y \sim Poisson(\lambda)$$
While the discrete $Poisson$ distribution is often used for count data, such as store customers, eCommerce orders, or website visits, the $Gamma$ distribution is a useful choice for modeling the rate at which these events occur ($\lambda$), since it models only positive continuous values but is otherwise quite flexible:
<img src=https://upload.wikimedia.org/wikipedia/commons/e/e6/Gamma_distribution_pdf.svg width="500">
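A minimal NumPy sketch of the same idea follows; the shape and scale values are made up for illustration (think of y as, say, daily order counts):

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior on the Poisson rate: Gamma(shape=k, scale=theta)
k, theta = 3.0, 2.0              # prior mean rate = k * theta = 6
lam = rng.gamma(k, theta, size=10_000)

# Count data given each sampled rate
y = rng.poisson(lam)

# Marginally, y is negative-binomially distributed with mean k * theta
print(y.mean())
```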
Dirichlet-Multinomial
A perhaps more interesting and seemingly less talked-about example of conjugate distributions is the Dirichlet-Multinomial distribution, introduced in chapter 3 of BDA3.
One way to think about the $Dirichlet-Multinomial$ distribution is that while the $Multinomial$ (-> multiple choices) distribution is a generalization of the $Binomial$ distribution (-> binary choice), the $Dirichlet$ distribution is a generalization of the $Beta$ distribution. That is, while the $Beta$ distribution models a single probability $p$, the $Dirichlet$ models the probabilities of multiple, mutually exclusive choices, parameterized by $a$, which is referred to as the concentration parameter and represents the weights for each choice (we'll see more on that later).
In other words, think coins for $Beta-Binomial$ and dice for $Dirichlet-Multinomial$.
$$\theta \sim Dirichlet(a)$$
$$y \sim Multinomial(n, \theta)$$
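A single draw from this pair can be sketched directly in NumPy, using the same uniform concentration vector the notebook adopts below:

```python
import numpy as np

rng = np.random.default_rng(1)

# One concentration weight per face of a six-sided die;
# a = [1, 1, 1, 1, 1, 1] is the uniform prior
a = np.ones(6)
theta = rng.dirichlet(a)            # a probability vector over the 6 faces
print(theta, theta.sum())           # the components always sum to 1

# Given theta, roll the die 122 times
rolls = rng.multinomial(122, theta)
print(rolls, rolls.sum())
```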
In the wild, we might encounter the Dirichlet distribution these days mostly in the context of topic modeling in natural language processing, where it's commonly used as part of a Latent Dirichlet Allocation (or LDA) model, which is a fancy way of saying we're trying to figure out the probability of an article belonging to a certain topic given its text.
However, for our purposes, let's look at the Dirichlet-Multinomial in the context of multiple choices, and let's start by throwing dice as a motivating example:
Throwing Dice
Let's first create some data representing 122 rolls of a six-sided die, where $p$ represents the expected probability for each side, $1/6$
End of explanation
sns.barplot(x=np.arange(1, k+1), y=y);
Explanation: Just looking at a simple bar plot, we suspect that we might not be dealing with a fair die!
However, students of Bayesian statistics that we are, we'd like to go further and quantify our uncertainty in the fairness of the die and calculate the probability that someone slipped us loaded dice.
End of explanation
n, y
with pm.Model() as dice_model:
# initializes the Dirichlet distribution with a uniform prior:
a = np.ones(k)
theta = pm.Dirichlet("theta", a=a)
    # theta[k-1] will hold the posterior probability of rolling a 6;
    # we'll compare it to the reference value p = 1/6
    six_bias = pm.Deterministic("six_bias", theta[k-1] - p)
results = pm.Multinomial("results", n=n, p=theta, observed=y)
dice_model
Explanation: Let's set up a simple model in PyMC3 that not only calculates the posterior probability for $theta$ (i.e. the probability for each side of the die), but also estimates the bias for throwing a $6$.
We will use a Deterministic variable, in addition to our unobserved (theta) and observed (results) variables.
For the prior on $theta$, we'll use a non-informative uniform distribution, by initializing the $Dirichlet$ prior with a series of 1s for the parameter a, one for each of the k possible outcomes. This is similar to initializing a $Beta$ distribution as $Beta(1, 1)$, which corresponds to the Uniform distribution.
End of explanation
pm.model_to_graphviz(dice_model)
dice_model.check_test_point()
Explanation: Starting with version 3.5, PyMC3 includes a handy function to plot models in plate notation:
End of explanation
with dice_model:
dice_trace = pm.sample(draws=1000)
Explanation: Let's draw 1,000 samples from the joint posterior using the default NUTS sampler:
End of explanation
with dice_model:
pm.traceplot(dice_trace, combined=True, lines={"theta": p})
Explanation: From the traceplot, we can already see that one of the $theta$ posteriors isn't in line with the rest:
End of explanation
axes = pm.plot_posterior(dice_trace, varnames=["theta"], ref_val=np.round(p, 3))
for i, ax in enumerate(axes):
ax.set_title(f"{i+1}")
Explanation: We'll plot the posterior distributions for each $theta$ and compare them to our reference value $p$ to see if the 95% HPD (Highest Posterior Density) interval includes $p = 1/6$.
End of explanation
ax = pm.plot_posterior(dice_trace, varnames=["six_bias"], ref_val=[0])
ax.set_title(f"P(Theta[Six] - {p:.2%})");
Explanation: We can clearly see that the HPD for the posterior probability for rolling a $6$ barely includes what we'd expect from a fair die.
To be more precise, let's plot the probability of our die being biased on $6$, by comparing $theta[Six]$ to $p$
End of explanation
six_bias_perc = len(dice_trace["six_bias"][dice_trace["six_bias"]>0])/len(dice_trace["six_bias"])
print(f'P(Six is biased) = {six_bias_perc:.2%}')
Explanation: Lastly, we can calculate the probability that the die is biased on $6$ by calculating the density to the right of our reference line at $0$:
End of explanation
y = np.asarray([727, 583, 137])
n = y.sum()
k = len(y)
n, k
Explanation: Better get some new dice...!
Polling #1
Let's turn our review of the Dirichlet-Multinomial distribution to another example, concerning polling data.
In section 3.4 of BDA3 on multivariate models, specifically the section on Multinomial Models for Categorical Data, the authors include a (somewhat dated) example of polling data from the 1988 Presidential race between George H.W. Bush and Michael Dukakis.
Here's the setup:
1,447 likely voters were surveyed about their preferences in the upcoming presidential election
Their responses were:
Bush: 727
Dukakis: 583
Other: 137
What is the probability that more people will vote for Bush over Dukakis?
i.e. what is the difference in support for the two major candidates?
We set up the data, where $k$ represents the number of choices the respondents had:
End of explanation
with pm.Model() as polling_model:
# initializes the Dirichlet distribution with a uniform prior:
a = np.ones(k)
theta = pm.Dirichlet("theta", a=a)
bush_dukakis_diff = pm.Deterministic("bush_dukakis_diff", theta[0] - theta[1])
likelihood = pm.Multinomial("likelihood", n=n, p=theta, observed=y)
pm.model_to_graphviz(polling_model)
with polling_model:
polling_trace = pm.sample(draws=1000)
with polling_model:
pm.traceplot(polling_trace, combined=True)
Explanation: We, again, set up a simple Dirichlet-Multinomial model and include a Deterministic variable that calculates the metric of interest - the difference in probability of respondents for Bush vs. Dukakis.
End of explanation
_, ax = plt.subplots(1,1, figsize=(10, 6))
sns.distplot(polling_trace["bush_dukakis_diff"], bins=20, ax=ax, kde=False, fit=stats.beta)
ax.axvline(0, c='g', linestyle='dotted')
ax.set_title("% Difference Bush vs Dukakis")
ax.set_xlabel("% Difference");
Explanation: Looking at the % difference between respondents for Bush vs Dukakis, we can see that most of the density is greater than 0%, signifying a strong advantage for Bush in this poll.
We've also fit a $Beta$ distribution to this data via scipy.stats, and we can see that posterior of the difference of the 2 $theta$ values is a pretty good match.
End of explanation
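As a side note on the stats.beta fit mentioned above, here is a self-contained sketch (with made-up parameters) of how scipy.stats recovers Beta parameters from samples via maximum likelihood:

```python
import numpy as np
from scipy import stats

# Synthetic "posterior samples" from a known Beta(20, 15)
samples = stats.beta.rvs(20, 15, size=5_000, random_state=2)

# fit() returns (a, b, loc, scale); we pin loc/scale to the unit interval
a_hat, b_hat, loc, scale = stats.beta.fit(samples, floc=0, fscale=1)
print(a_hat, b_hat)   # should land near the true values 20 and 15
```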
bush_dukakis_diff_perc = len(polling_trace["bush_dukakis_diff"][polling_trace["bush_dukakis_diff"]>0])/len(polling_trace["bush_dukakis_diff"])
print(f'P(More Responses for Bush) = {bush_dukakis_diff_perc:.0%}')
Explanation: Percentage of samples with bush_dukakis_diff > 0:
End of explanation
data = pd.DataFrame([
{"candidate": "bush", "pre": 294, "post": 288},
{"candidate": "dukakis", "pre": 307, "post": 332},
{"candidate": "other", "pre": 38, "post": 10}
], columns=["candidate", "pre", "post"])
data
Explanation: Polling #2
As an extension to the previous model, the authors of BDA include an exercise in chapter 3.10 (Exercise 2) that presents us with polling data from the 1988 Presidential race, taken before and after one of the debates.
Comparison of two multinomial observations: on September 25, 1988, the evening of a
presidential campaign debate, ABC News conducted a survey of registered voters in the
United States; 639 persons were polled before the debate, and 639 different persons were
polled after. The results are displayed in Table 3.2. Assume the surveys are independent
simple random samples from the population of registered voters. Model the data with
two different multinomial distributions. For $j = 1, 2$, let $\alpha_j$ be the proportion of voters
who preferred Bush, out of those who had a preference for either Bush or Dukakis at
the time of survey $j$. Plot a histogram of the posterior density for $\alpha_2 − \alpha_1$. What is the
posterior probability that there was a shift toward Bush?
Let's copy the data from the exercise and model the problem as a probabilistic model, again using PyMC3:
End of explanation
y = data[["pre", "post"]].T.values
y
Explanation: Convert to 2x3 array
End of explanation
n = y.sum(axis=1)
n
Explanation: Number of respondents in each survey
End of explanation
m = y[:, :2].sum(axis=1)
m
Explanation: Number of respondents for the 2 major candidates in each survey
End of explanation
n_debates, n_candidates = y.shape
n_debates, n_candidates
Explanation: For this model, we'll need to set up the priors slightly differently. Instead of 1 set of thetas, we need 2, one for each survey (pre/post debate).
To do that without creating specific pre/post versions of each variable, we'll take advantage of PyMC3's shape parameter, available for most (all?) distributions.
In this case, we'll need a 2-dimensional shape parameter, representing the number of debates n_debates and the number of choices in candidates n_candidates
End of explanation
with pm.Model() as polling_model_debates:
# initializes the Dirichlet distribution with a uniform prior:
shape = (n_debates, n_candidates)
a = np.ones(shape)
# This creates a separate Dirichlet distribution for each debate
# where sum of probabilities across candidates = 100% for each debate
theta = pm.Dirichlet("theta", a=a, shape=shape)
# get the "Bush" theta for each debate, at index=0
bush_pref = pm.Deterministic("bush_pref", theta[:, 0] * n / m)
# to calculate probability that support for Bush shifted from debate 1 [0] to 2 [1]
bush_shift = pm.Deterministic("bush_shift", bush_pref[1]-bush_pref[0])
# because of the shapes of the inputs, this essentially creates 2 multinomials,
# one for each debate
responses = pm.Multinomial("responses", n=n, p=theta, observed=y)
Explanation: Thus, we need to initialize a Dirichlet distribution prior with shape (2,3) and then refer to the relevant parameters by index where needed.
End of explanation
for v in polling_model_debates.unobserved_RVs:
print(v, v.tag.test_value.shape)
Explanation: For models with multi-dimensional shapes, it's always good to check the shapes of the various parameters before sampling:
End of explanation
pm.model_to_graphviz(polling_model_debates)
Explanation: The plate notation visual can also help with that:
End of explanation
with polling_model_debates:
polling_trace_debates = pm.sample(draws=3000, tune=1500)
with polling_model_debates:
pm.traceplot(polling_trace_debates, combined=True)
Explanation: Let's sample with a slightly higher number of draws and tuning steps:
End of explanation
s = ["pre", "post"]
candidates = data["candidate"].values
pd.DataFrame(polling_trace_debates["theta"].mean(axis=0), index=s, columns=candidates)
Explanation: We'll take a look at the means of the posteriors for theta, indicating the % of support for each candidate pre & post debate:
End of explanation
pd.DataFrame(polling_trace_debates["bush_pref"].mean(axis=0), index=s, columns=["bush_pref"])
Explanation: Just from the means, we can see that the number of Bush supporters has likely decreased post debate from 48.8% to 46.3% (as a % of supporters of the 2 major candidates):
End of explanation
_, ax = plt.subplots(2,1, figsize=(10, 10))
sns.distplot(polling_trace_debates["bush_pref"][:,0], hist=False, ax=ax[0], label="Pre-Debate")
sns.distplot(polling_trace_debates["bush_pref"][:,1], hist=False, ax=ax[0], label="Post-Debate")
ax[0].set_title("% Responses for Bush vs Dukakis")
ax[0].set_xlabel("% Responses");
sns.distplot(polling_trace_debates["bush_shift"], hist=True, ax=ax[1], label="P(Bush Shift)")
ax[1].axvline(0, c='g', linestyle='dotted')
ax[1].set_title("% Shift Pre/Post Debate")
ax[1].set_xlabel("% Shift");
Explanation: Let's compare the results visually, by plotting the posterior distributions of the pre/post debate values for % responses for Bush and the posterior for pre/post difference in Bush supporters:
End of explanation
perc_shift = (len(polling_trace_debates["bush_shift"][polling_trace_debates["bush_shift"] > 0])
/len(polling_trace_debates["bush_shift"])
)
print(f'P(Shift Towards Bush) = {perc_shift:.1%}')
Explanation: From the second plot, we can already see that a large portion of the posterior density is below 0, but let's be precise and actually calculate the probability that support shifted towards Bush after the debate:
End of explanation |
8,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
10 For-Loop Review Exercises
In parts of the following exercises I have replaced the code with "XXX". In every exercise, the task is to fill in the correct code and then run the cell.
1. Print all of these prime numbers
Step1: 2. Print all the numbers from 0 to 4
Step2: 4. Build a for loop that prints all even numbers lower than 237.
Step3: 5. Add up all the numbers in the list
Step4: 6. Add up only the even numbers
Step5: 7. Use a for loop to print Hello World 5 times in a row
Step6: 8. Develop a program that finds all numbers between 2000 and 3200 that are divisible by 7 but not by 5. The result should be printed on a single line. Tip
Step7: 9. Write a for loop that converts the numbers in the following list from int to str.
Step8: 10. Now write a program that replaces every digit 4 with the letter A and every digit 5 with the letter B. | Python Code:
primzweibissieben = [2, 3, 5, 7]
for prime in primzweibissieben:
print(prime)
Explanation: 10 For-Loop Review Exercises
In parts of the following exercises I have replaced the code with "XXX". In every exercise, the task is to fill in the correct code and then run the cell.
1. Print all of these prime numbers:
End of explanation
for x in range(5):
print(x)
for x in range(3, 6):
print(x)
Explanation: 2. Print all the numbers from 0 to 4:
End of explanation
numbers = [
951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,
615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,
386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,
399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,
815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,
958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,
743, 527
]
# Your code goes here:
new_lst = []
for elem in numbers:
    if elem < 238 and elem % 2 == 0:
        new_lst.append(elem)
print(new_lst)
# Solution:
Explanation: 4. Build a for loop that prints all even numbers lower than 237.
End of explanation
sum(numbers)
# Solution:
Explanation: 5. Add up all the numbers in the list
End of explanation
evennumber = []
for elem in numbers:
if elem % 2 == 0:
evennumber.append(elem)
sum(evennumber)
Explanation: 6. Add up only the even numbers
End of explanation
Satz = ['Hello World', 'Hello World','Hello World','Hello World','Hello World']
for elem in Satz:
print(elem)
# Solution
Explanation: 7. Use a for loop to print Hello World 5 times in a row
End of explanation
l=[]
for i in range(2000, 3201):
if (i % 7==0) and (i % 5!=0):
l.append(str(i))
print(','.join(l))
Explanation: 8. Develop a program that finds all numbers between 2000 and 3200 that are divisible by 7 but not by 5. The result should be printed on a single line. Tip: Have a look at Python's comparison operators here.
End of explanation
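As a tiny illustration of how the modulo and comparison operators combine in the condition above:

```python
# Divisible by 7 but not by 5: combine the modulo operator with comparisons
print(14 % 7 == 0)                          # True: 14 is divisible by 7
print(14 % 5 != 0)                          # True: 14 is not divisible by 5
print((2002 % 7 == 0) and (2002 % 5 != 0))  # True: 2002 qualifies
```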
lst = range(45,99)
newlst = []
for i in lst:
i = str(i)
newlst.append(i)
print(newlst)
Explanation: 9. Write a for loop that converts the numbers in the following list from int to str.
End of explanation
newnewlist = []
for elem in newlst:
if '4' in elem:
elem = elem.replace('4', 'A')
if '5' in elem:
elem = elem.replace('5', 'B')
newnewlist.append(elem)
newnewlist
Explanation: 10. Now write a program that replaces every digit 4 with the letter A and every digit 5 with the letter B.
End of explanation |
8,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Navigation exercise
<span title="Roomba navigating around furniture"><img src="img/roomba.jpg" align="right" width=200></span>
A mobile robot like the Roomba in the picture must avoid crashing into the obstacles in its environment, and if it does collide, it has to react so that it neither causes nor suffers any damage.
With the touch sensor we cannot avoid the collision, but we can detect it once it happens and react.
The goal of this exercise is to program the following behavior into the robot
Step1: Version 1.0
Use the code from the previous while-loop example
Step2: Version 2.0
The robot's maneuver is supposed to let it avoid the obstacle and therefore move forward again. How can we program that?
We need to repeat the whole block of behavior instructions, including the loop. No problem: programming languages let you put one loop inside another, which is called nested loops.
Use a for loop to repeat the previous code 5 times.
Step3: Version 3.0
<img src="img/interrupt.png" align="right">
What if, instead of repeating 10 or 20 times, we want the robot to keep going until we stop it ourselves? We can do that with an infinite loop, telling the program to stop with the interrupt kernel button.
In Python, an infinite loop is written like this
Step4: Version 4.0
The robot's behavior, always turning to the same side, is a bit predictable, don't you think?
Let's introduce an element of chance
Step5: The random function is like rolling a die, but instead of giving a value from 1 to 6, it returns a real number between 0 and 1.
The robot can then use that value to decide whether to turn left or right. How? If the value is greater than 0.5, it turns one way; otherwise, the other. It will then turn at random, with a 50% probability for each side.
Add the random turning decision to the code from the previous version
Step6: Recap
Before continuing, disconnect the robot | Python Code:
from functions import connect, touch, forward, backward, left, right, stop, disconnect
from time import sleep
connect()
Explanation: Navigation exercise
<span title="Roomba navigating around furniture"><img src="img/roomba.jpg" align="right" width=200></span>
A mobile robot like the Roomba in the picture must avoid crashing into the obstacles in its environment, and if it does collide, it has to react so that it neither causes nor suffers any damage.
With the touch sensor we cannot avoid the collision, but we can detect it once it happens and react.
The goal of this exercise is to program the following behavior into the robot:
while it detects nothing, the robot moves forward
if the sensor detects a collision, the robot moves backward and turns
Connect the robot:
End of explanation
while not touch():
forward()
backward()
sleep(1)
left()
sleep(1)
stop()
Explanation: Version 1.0
Use the code from the previous while-loop example: you only need to add that, when it collides, the robot moves backward, turns a little (to your preferred side), and stops.
End of explanation
# Template:
# for ...:
#     while ...:
#         ...
#     ...
for i in range(5):
while not touch():
forward()
backward()
sleep(1)
left()
sleep(1)
stop()
Explanation: Version 2.0
The robot's maneuver is supposed to let it avoid the obstacle and therefore move forward again. How can we program that?
We need to repeat the whole block of behavior instructions, including the loop. No problem: programming languages let you put one loop inside another, which is called nested loops.
Use a for loop to repeat the previous code 5 times.
End of explanation
try:
while True:
while not touch():
forward()
backward()
sleep(1)
left()
sleep(1)
except KeyboardInterrupt:
stop()
Explanation: Version 3.0
<img src="img/interrupt.png" align="right">
What if, instead of repeating 10 or 20 times, we want the robot to keep going until we stop it ourselves? We can do that with an infinite loop, telling the program to stop with the interrupt kernel button.
In Python, an infinite loop is written like this:
python
while True:
    statement
When the program is interrupted, the instruction being executed at that moment is abandoned, and the robot must be stopped. In Python, this process is called an exception, and it is handled like this:
python
try:
    while True:
        statement # the behavior goes here
except KeyboardInterrupt:
    statement # here we stop the robot
Use an infinite loop to repeat the robot's behavior until you stop it.
End of explanation
from random import random
random()
Explanation: Version 4.0
The robot's behavior, always turning to the same side, is a bit predictable, don't you think?
Let's introduce an element of chance: programming languages come with random number generators, which are like the computer's dice.
Run the following code several times with Ctrl+Enter and check the results.
End of explanation
try:
while True:
while not touch():
forward()
backward()
sleep(1)
if random() > 0.5:
left()
else:
right()
sleep(1)
except KeyboardInterrupt:
stop()
Explanation: The random function is like rolling a die, but instead of giving a value from 1 to 6, it returns a real number between 0 and 1.
The robot can then use that value to decide whether to turn left or right. How? If the value is greater than 0.5, it turns one way; otherwise, the other. It will then turn at random, with a 50% probability for each side.
Add the random turning decision to the code from the previous version:
End of explanation
disconnect()
Explanation: Recap
Before continuing, disconnect the robot:
End of explanation |
8,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Denoising using a TV-2 filter
In this demo we show how to use the ISS filter described in Nonlinear inverse scale space methods
M Burger, G Gilboa, S Osher, J Xu - Communications in Mathematical Sciences, 2006
Step1: Make some test data
The test data is a cubic volume with an inner cube set to one surrounded by zeros. Gaussian noise is added to the image. It is important that the data has the type float32.
Step2: The cube has the dimensions 40x40x40 voxels and the StdDev of the noise is 0.5.
Step3: The filter operates inplace, therefore we make a deep copy of the image to be able to compare the performance of the filter. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import advancedfilters as af
Explanation: Denoising using a TV-2 filter
In this demo we show how to use the ISS filter described in Nonlinear inverse scale space methods
M Burger, G Gilboa, S Osher, J Xu - Communications in Mathematical Sciences, 2006
End of explanation
def buildTestVolume(size,sigma) :
vol = np.zeros([size,size,size])
margin = size // 4
vol[margin:-margin,margin:-margin,margin:-margin]=1
vol = vol + np.random.normal(0,1,size=vol.shape)*sigma
return vol.astype('float32')
Explanation: Make some test data
The test data is a cubic volume with an inner cube set to one surrounded by zeros. Gaussian noise is added to the image. It is important that the data has the type float32.
End of explanation
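As a quick sanity check on the shape and dtype requirement (re-defining the helper here so the cell is self-contained; the name differs from the notebook's `buildTestVolume` only in casing):

```python
import numpy as np

def build_test_volume(size, sigma):
    vol = np.zeros([size, size, size])
    margin = size // 4
    vol[margin:-margin, margin:-margin, margin:-margin] = 1
    vol = vol + np.random.normal(0, 1, size=vol.shape) * sigma
    return vol.astype('float32')    # the filter expects float32 input

vol = build_test_volume(40, 0.5)
print(vol.shape, vol.dtype)         # (40, 40, 40) float32
```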
vol = buildTestVolume(40,0.5)
fig,ax = plt.subplots(1,2,figsize=[12,5])
ax[0].imshow(vol[20]);
ax[1].hist(vol.ravel(),bins=256);
Explanation: The cube has the dimensions 40x40x40 voxels and the StdDev of the noise is 0.5.
End of explanation
fvol = vol.copy()
iss=af.ISSfilter3D()
iss.setInitialImageType(af.InitialImageOriginal)
iss.setRegularizationType(af.RegularizationTV2)
fvol = vol.copy()
# Normalize
m = vol.mean()
s = vol.std()
fvol = (fvol-m)/s
# Run the filter inplace
iss.process(fvol,tau=0.125,plambda=1,palpha=0.25,N=10)
# Rescale
fvol = s*fvol + m
error = iss.errors()
fig,ax=plt.subplots(2,3,figsize=[15,10])
ax = ax.ravel()
ax[0].imshow(vol[20],cmap='gray');
ax[1].imshow(fvol[20],cmap='gray')
ax[2].axis('off')
ax[3].imshow(vol[20]-fvol[20]); ax[3].set_title('Original-Filtered')
ax[4].plot(error); ax[4].set_title('MSE of the iterations');
ax[5].hist(vol.ravel(), bins=256,label='Original');
ax[5].hist(fvol.ravel(), bins=256,label='Filtered');
ax[5].legend();
ax[5].set_title('Histograms of the images');
Explanation: The filter operates inplace, therefore we make a deep copy of the image to be able to compare the performance of the filter.
End of explanation |
8,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is developed as part of the KIPAC/StatisticalMethods course, (c) 2019 Adam Mantz, licensed under the GPLv2.
What's the deal with REPLACE_WITH_YOUR_SOLUTION?
Tutorial notebooks from KIPAC/StatisticalMethods will all start with these definitions
Step1: You'll then see cells that look something like this
Step2: Go ahead and try to run it. You'll get an error traceback, the end of which points out that you've neglected to provide a solution to the posed problem. This is our preferred method of providing incomplete code, since an alternative like
python
x = # set x equal to something
will throw a different and less informative error if you accidentally run the cell before completing it.
The try-except wrapper is there so that we, the developers, can easily verify that the entire notebook runs if provided with a correct solution. There is no need for you to write solutions for each cell in separate files, and doing so will just make this notebook harder for you to use later. Instead, we suggest removing the try-except construction entirely, so your completed notebook cell would look like
Step3: You'll also see cells in this format | Python Code:
class SolutionMissingError(Exception):
def __init__(self):
Exception.__init__(self,"You need to complete the solution for this code to work!")
def REPLACE_WITH_YOUR_SOLUTION():
raise SolutionMissingError
REMOVE_THIS_LINE = REPLACE_WITH_YOUR_SOLUTION
Explanation: This notebook is developed as part of the KIPAC/StatisticalMethods course, (c) 2019 Adam Mantz, licensed under the GPLv2.
What's the deal with REPLACE_WITH_YOUR_SOLUTION?
Tutorial notebooks from KIPAC/StatisticalMethods will all start with these definitions:
End of explanation
# Set x equal to something
try:
exec(open('solutions/setx.py').read())
except IOError:
x = REPLACE_WITH_YOUR_SOLUTION()
Explanation: You'll then see cells that look something like this:
End of explanation
# Set x equal to something
x = 5.0
Explanation: Go ahead and try to run it. You'll get an error traceback, the end of which points out that you've neglected to provide a solution to the posed problem. This is our preferred method of providing incomplete code, since an alternative like
python
x = # set x equal to something
will throw a different and less informative error if you accidentally run the cell before completing it.
The try-except wrapper is there so that we, the developers, can easily verify that the entire notebook runs if provided with a correct solution. There is no need for you to write solutions for each cell in separate files, and doing so will just make this notebook harder for you to use later. Instead, we suggest removing the try-except construction entirely, so your completed notebook cell would look like
End of explanation
# Define a function that does stuff
try:
exec(open('solutions/func.py').read())
except IOError:
REMOVE_THIS_LINE()
def myfunc(a, b):
c = REPLACE_WITH_YOUR_SOLUTION()
return REPLACE_WITH_YOUR_SOLUTION()
Explanation: You'll also see cells in this format:
End of explanation |
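For illustration, here is what such a cell might look like once completed. The body below is purely hypothetical; the real solution depends on the problem posed:

```python
# Define a function that does stuff (hypothetical completed solution)
def myfunc(a, b):
    c = a + b        # the first REPLACE_WITH_YOUR_SOLUTION() becomes real code
    return c * 2     # and the second one becomes the value to return

print(myfunc(1, 2))  # prints 6
```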
8,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing quality metrics for binary classification
Programming Assignment
In this assignment we will look at how different quality metrics differ from one another. We will focus on binary classification (with labels 0 and 1), but treat it as the problem of predicting the probability that an object belongs to class 1. Thus we will work with a real-valued rather than a binary target variable.
The assignment is structured as a demonstration with Programming Assignment elements. You need to run the code that is already written and examine the proposed plots, and also implement several functions of your own. For grading, write the results of these functions on the specified inputs to separate files; you can do this with the write_answer_N functions suggested in the tasks, where N is the task number. Upload these files to the system.
To build the plots we need to import the corresponding modules.
The seaborn library makes the plots prettier. If you don't want to use it, comment out the third line.
Moreover, the matplotlib and seaborn modules are not needed to complete the Programming Assignment (you can skip running the plotting cells and look at the already-built pictures).
Step1: What algorithms predict
To compute quality metrics in supervised learning you only need to know two vectors
Step2: The ideal situation
Step3: The probability intervals for the two classes are perfectly separated by the threshold T = 0.5.
Most often the intervals overlap; then the threshold has to be chosen carefully.
The most wrong-headed algorithm does everything the other way around
Step4: An algorithm can be cautious and try not to push probabilities far from 0.5, or it can take risks, making predictions close to zero or one.
Step5: The intervals can also be shifted. If an algorithm is afraid of false positive errors, it will more often make predictions close to zero.
Likewise, to avoid false negative errors, it makes sense to predict large probabilities more often.
Step6: We have described the different characters of probability vectors. Next we will look at how the metrics evaluate different prediction vectors, so be sure to run the cells that create the vectors for visualization.
Метрики, оценивающие бинарные векторы предсказаний
Есть две типичные ситуации, когда специалисты по машинному обучению начинают изучать характеристики метрик качества
Step7: Все три метрики легко различают простые случаи хороших и плохих алгоритмов. Обратим внимание, что метрики имеют область значений [0, 1], и потому их легко интерпретировать.
Метрикам не важны величины вероятностей, им важно только то, сколько объектов неправильно зашли за установленную границу (в данном случае T = 0.5).
Метрика accuracy дает одинаковый вес ошибкам false positive и false negative, зато пара метрик precision и recall однозначно идентифицирует это различие. Собственно, их для того и используют, чтобы контролировать ошибки FP и FN.
Мы измерили три метрики, фиксировав порог T = 0.5, потому что для почти всех картинок он кажется оптимальным. Давайте посмотрим на последней (самой интересной для этих метрик) группе векторов, как меняются precision и recall при увеличении порога.
Step8: При увеличении порога мы делаем меньше ошибок FP и больше ошибок FN, поэтому одна из кривых растет, а вторая - падает. По такому графику можно подобрать оптимальное значение порога, при котором precision и recall будут приемлемы. Если такого порога не нашлось, нужно обучать другой алгоритм.
Оговоримся, что приемлемые значения precision и recall определяются предметной областью. Например, в задаче определения, болен ли пациент определенной болезнью (0 - здоров, 1 - болен), ошибок false negative стараются избегать, требуя recall около 0.9. Можно сказать человеку, что он болен, и при дальнейшей диагностике выявить ошибку; гораздо хуже пропустить наличие болезни.
<font color="green" size=5>Programming assignment
Step9: F1-score
Очевидный недостаток пары метрик precision-recall - в том, что их две
Step10: F1-метрика в двух последних случаях, когда одна из парных метрик равна 1, значительно меньше, чем в первом, сбалансированном случае.
<font color="green" size=5>Programming assignment
Step11: Метрики, оценивающие векторы вероятностей класса 1
Рассмотренные метрики удобно интерпретировать, но при их использовании мы не учитываем большую часть информации, полученной от алгоритма. В некоторых задачах вероятности нужны в чистом виде, например, если мы предсказываем, выиграет ли команда в футбольном матче, и величина вероятности влияет на размер ставки за эту команду. Даже если в конце концов мы все равно бинаризуем предсказание, хочется следить за характером вектора вероятности.
Log_loss
Log_loss вычисляет правдоподобие меток в actual с вероятностями из predicted, взятое с противоположным знаком
Step12: Как и предыдущие метрики, log_loss хорошо различает идеальный, типичный и плохой случаи. Но обратите внимание, что интерпретировать величину достаточно сложно
Step13: Обратите внимание на разницу weighted_log_loss между случаями Avoids FP и Avoids FN.
ROC и AUC
При построении ROC-кривой (receiver operating characteristic) происходит варьирование порога бинаризации вектора вероятностей, и вычисляются величины, зависящие от числа ошибок FP и FN. Эти величины задаются так, чтобы в случае, когда существует порог для идеального разделения классов, ROC-кривая проходила через определенную точку - верхний левый угол квадрата [0, 1] x [0, 1]. Кроме того, она всегда проходит через левый нижний и правый верхний углы. Получается наглядная визуализация качества алгоритма. С целью охарактеризовать эту визуализацию численно, ввели понятие AUC - площадь под ROC-кривой.
Есть несложный и эффективный алгоритм, который за один проход по выборке вычисляет ROC-кривую и AUC, но мы не будем вдаваться в детали.
Построим ROC-кривые для наших задач
Step14: Чем больше объектов в выборке, тем более гладкой выглядит кривая (хотя на самом деле она все равно ступенчатая).
Как и ожидалось, кривые всех идеальных алгоритмов проходят через левый верхний угол. На первом графике также показана типичная ROC-кривая (обычно на практике они не доходят до "идеального" угла).
AUC рискующего алгоритма значительно меньше, чем у осторожного, хотя осторожный и рискующий идеальные алгоритмы не различаются по ROC или AUC. Поэтому стремиться увеличить зазор между интервалами вероятностей классов смысла не имеет.
Наблюдается перекос кривой в случае, когда алгоритму свойственны ошибки FP или FN. Однако по величине AUC это отследить невозможно (кривые могут быть симметричны относительно диагонали (0, 1)-(1, 0)).
После того, как кривая построена, удобно выбирать порог бинаризации, в котором будет достигнут компромисс между FP или FN. Порог соответствует точке на кривой. Если мы хотим избежать ошибок FP, нужно выбирать точку на левой стороне квадрата (как можно выше), если FN - точку на верхней стороне квадрата (как можно левее). Все промежуточные точки будут соответствовать разным пропорциям FP и FN.
<font color="green" size=5>Programming assignment | Python Code:
import numpy as np
from matplotlib import pyplot as plt
import seaborn
%matplotlib inline
Explanation: Сравнение метрик качества бинарной классификации
Programming Assignment
В этом задании мы разберемся, в чем состоит разница между разными метриками качества. Мы остановимся на задаче бинарной классификации (с откликами 0 и 1), но рассмотрим ее как задачу предсказания вероятности того, что объект принадлежит классу 1. Таким образом, мы будем работать с вещественной, а не бинарной целевой переменной.
Задание оформлено в стиле демонстрации с элементами Programming Assignment. Вам нужно запустить уже написанный код и рассмотреть предложенные графики, а также реализовать несколько своих функций. Для проверки запишите в отдельные файлы результаты работы этих функций на указанных наборах входных данных, это можно сделать с помощью предложенных в заданиях функций write_answer_N, N - номер задачи. Загрузите эти файлы в систему.
Для построения графиков нужно импортировать соответствующие модули.
Библиотека seaborn позволяет сделать графики красивее. Если вы не хотите ее использовать, закомментируйте третью строку.
Более того, для выполнения Programming Assignment модули matplotlib и seaborn не нужны (вы можете не запускать ячейки с построением графиков и смотреть на уже построенные картинки).
End of explanation
# рисует один scatter plot
def scatter(actual, predicted, T):
plt.scatter(actual, predicted)
plt.xlabel("Labels")
plt.ylabel("Predicted probabilities")
plt.plot([-0.2, 1.2], [T, T])
plt.axis([-0.1, 1.1, -0.1, 1.1])
# рисует несколько scatter plot в таблице, имеющей размеры shape
def many_scatters(actuals, predicteds, Ts, titles, shape):
plt.figure(figsize=(shape[1]*5, shape[0]*5))
i = 1
for actual, predicted, T, title in zip(actuals, predicteds, Ts, titles):
ax = plt.subplot(shape[0], shape[1], i)
ax.set_title(title)
i += 1
scatter(actual, predicted, T)
Explanation: Что предсказывают алгоритмы
Для вычисления метрик качества в обучении с учителем нужно знать только два вектора: вектор правильных ответов и вектор предсказанных величин; будем обозначать их actual и predicted. Вектор actual известен из обучающей выборки, вектор predicted возвращается алгоритмом предсказания. Сегодня мы не будем использовать какие-то алгоритмы классификации, а просто рассмотрим разные векторы предсказаний.
В нашей формулировке actual состоит из нулей и единиц, а predicted - из величин из интервала [0, 1] (вероятности класса 1). Такие векторы удобно показывать на scatter plot.
Чтобы сделать финальное предсказание (уже бинарное), нужно установить порог T: все объекты, имеющие предсказание выше порога, относят к классу 1, остальные - к классу 0.
End of explanation
actual_0 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_0 = np.array([ 0.19015288, 0.23872404, 0.42707312, 0.15308362, 0.2951875 ,
0.23475641, 0.17882447, 0.36320878, 0.33505476, 0.202608 ,
0.82044786, 0.69750253, 0.60272784, 0.9032949 , 0.86949819,
0.97368264, 0.97289232, 0.75356512, 0.65189193, 0.95237033,
0.91529693, 0.8458463 ])
plt.figure(figsize=(5, 5))
scatter(actual_0, predicted_0, 0.5)
Explanation: Идеальная ситуация: существует порог T, верно разделяющий вероятности, соответствующие двум классам. Пример такой ситуации:
End of explanation
actual_1 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1.])
predicted_1 = np.array([ 0.41310733, 0.43739138, 0.22346525, 0.46746017, 0.58251177,
0.38989541, 0.43634826, 0.32329726, 0.01114812, 0.41623557,
0.54875741, 0.48526472, 0.21747683, 0.05069586, 0.16438548,
0.68721238, 0.72062154, 0.90268312, 0.46486043, 0.99656541,
0.59919345, 0.53818659, 0.8037637 , 0.272277 , 0.87428626,
0.79721372, 0.62506539, 0.63010277, 0.35276217, 0.56775664])
actual_2 = np.array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
predicted_2 = np.array([ 0.07058193, 0.57877375, 0.42453249, 0.56562439, 0.13372737,
0.18696826, 0.09037209, 0.12609756, 0.14047683, 0.06210359,
0.36812596, 0.22277266, 0.79974381, 0.94843878, 0.4742684 ,
0.80825366, 0.83569563, 0.45621915, 0.79364286, 0.82181152,
0.44531285, 0.65245348, 0.69884206, 0.69455127])
many_scatters([actual_0, actual_1, actual_2], [predicted_0, predicted_1, predicted_2],
[0.5, 0.5, 0.5], ["Perfect", "Typical", "Awful algorithm"], (1, 3))
Explanation: Интервалы вероятностей для двух классов прекрасно разделяются порогом T = 0.5.
Чаще всего интервалы накладываются - тогда нужно аккуратно подбирать порог.
Самый неправильный алгоритм делает все наоборот: поднимает вероятности класса 0 выше вероятностей класса 1. Если так произошло, стоит посмотреть, не перепутались ли метки 0 и 1 при создании целевого вектора из сырых данных.
Примеры:
End of explanation
# рискующий идеальный алгоритм
actual_0r = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_0r = np.array([ 0.23563765, 0.16685597, 0.13718058, 0.35905335, 0.18498365,
0.20730027, 0.14833803, 0.18841647, 0.01205882, 0.0101424 ,
0.10170538, 0.94552901, 0.72007506, 0.75186747, 0.85893269,
0.90517219, 0.97667347, 0.86346504, 0.72267683, 0.9130444 ,
0.8319242 , 0.9578879 , 0.89448939, 0.76379055])
# рискующий хороший алгоритм
actual_1r = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_1r = np.array([ 0.13832748, 0.0814398 , 0.16136633, 0.11766141, 0.31784942,
0.14886991, 0.22664977, 0.07735617, 0.07071879, 0.92146468,
0.87579938, 0.97561838, 0.75638872, 0.89900957, 0.93760969,
0.92708013, 0.82003675, 0.85833438, 0.67371118, 0.82115125,
0.87560984, 0.77832734, 0.7593189, 0.81615662, 0.11906964,
0.18857729])
many_scatters([actual_0, actual_1, actual_0r, actual_1r],
[predicted_0, predicted_1, predicted_0r, predicted_1r],
[0.5, 0.5, 0.5, 0.5],
["Perfect careful", "Typical careful", "Perfect risky", "Typical risky"],
(2, 2))
Explanation: Алгоритм может быть осторожным и стремиться сильно не отклонять вероятности от 0.5, а может рисковать - делать предсказания близкими к нулю или единице.
End of explanation
actual_10 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1.])
predicted_10 = np.array([ 0.29340574, 0.47340035, 0.1580356 , 0.29996772, 0.24115457, 0.16177793,
0.35552878, 0.18867804, 0.38141962, 0.20367392, 0.26418924, 0.16289102,
0.27774892, 0.32013135, 0.13453541, 0.39478755, 0.96625033, 0.47683139,
0.51221325, 0.48938235, 0.57092593, 0.21856972, 0.62773859, 0.90454639, 0.19406537,
0.32063043, 0.4545493 , 0.57574841, 0.55847795 ])
actual_11 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_11 = np.array([ 0.35929566, 0.61562123, 0.71974688, 0.24893298, 0.19056711, 0.89308488,
0.71155538, 0.00903258, 0.51950535, 0.72153302, 0.45936068, 0.20197229, 0.67092724,
0.81111343, 0.65359427, 0.70044585, 0.61983513, 0.84716577, 0.8512387 ,
0.86023125, 0.7659328 , 0.70362246, 0.70127618, 0.8578749 , 0.83641841,
0.62959491, 0.90445368])
many_scatters([actual_1, actual_10, actual_11], [predicted_1, predicted_10, predicted_11],
[0.5, 0.5, 0.5], ["Typical", "Avoids FP", "Avoids FN"], (1, 3))
Explanation: Также интервалы могут смещаться. Если алгоритм боится ошибок false positive, то он будет чаще делать предсказания, близкие к нулю.
Аналогично, чтобы избежать ошибок false negative, логично чаще предсказывать большие вероятности.
End of explanation
from sklearn.metrics import precision_score, recall_score, accuracy_score
T = 0.5
print("Алгоритмы, разные по качеству:")
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
                                    [predicted_0 > T, predicted_1 > T, predicted_2 > T],
                                    ["Perfect:", "Typical:", "Awful:"]):
    print(descr, "precision =", precision_score(actual, predicted),
          "recall =", recall_score(actual, predicted), ";",
          "accuracy =", accuracy_score(actual, predicted))
print()
print("Осторожный и рискующий алгоритмы:")
for actual, predicted, descr in zip([actual_1, actual_1r],
                                    [predicted_1 > T, predicted_1r > T],
                                    ["Typical careful:", "Typical risky:"]):
    print(descr, "precision =", precision_score(actual, predicted),
          "recall =", recall_score(actual, predicted), ";",
          "accuracy =", accuracy_score(actual, predicted))
print()
print("Разные склонности алгоритмов к ошибкам FP и FN:")
for actual, predicted, descr in zip([actual_10, actual_11],
                                    [predicted_10 > T, predicted_11 > T],
                                    ["Avoids FP:", "Avoids FN:"]):
    print(descr, "precision =", precision_score(actual, predicted),
          "recall =", recall_score(actual, predicted), ";",
          "accuracy =", accuracy_score(actual, predicted))
Explanation: Мы описали разные характеры векторов вероятностей. Далее мы будем смотреть, как метрики оценивают разные векторы предсказаний, поэтому обязательно выполните ячейки, создающие векторы для визуализации.
Метрики, оценивающие бинарные векторы предсказаний
Есть две типичные ситуации, когда специалисты по машинному обучению начинают изучать характеристики метрик качества:
1. при участии в соревновании или решении прикладной задачи, когда вектор предсказаний оценивается по конкретной метрике, и нужно построить алгоритм, максимизирующий эту метрику.
2. на этапе формализации задачи машинного обучения, когда есть требования прикладной области, и нужно предложить математическую метрику, которая будет соответствовать этим требованиям.
Далее мы вкратце рассмотрим каждую метрику с этих двух позиций.
Precision и recall; accuracy
Для начала разберемся с метриками, оценивающие качество уже после бинаризации по порогу T, то есть сравнивающие два бинарных вектора: actual и predicted.
Две популярные метрики - precision и recall. Первая показывает, как часто алгоритм предсказывает класс 1 и оказывается правым, а вторая - как много объектов класса 1 алгоритм нашел.
Также рассмотрим самую простую и известную метрику - accuracy; она показывает долю правильных ответов.
Выясним преимущества и недостатки этих метрик, попробовав их на разных векторах вероятностей.
End of explanation
from sklearn.metrics import precision_recall_curve
precs = []
recs = []
threshs = []
labels = ["Typical", "Avoids FP", "Avoids FN"]
for actual, predicted in zip([actual_1, actual_10, actual_11],
[predicted_1, predicted_10, predicted_11]):
prec, rec, thresh = precision_recall_curve(actual, predicted)
precs.append(prec)
recs.append(rec)
threshs.append(thresh)
plt.figure(figsize=(15, 5))
for i in range(3):
ax = plt.subplot(1, 3, i+1)
plt.plot(threshs[i], precs[i][:-1], label="precision")
plt.plot(threshs[i], recs[i][:-1], label="recall")
plt.xlabel("threshold")
ax.set_title(labels[i])
plt.legend()
Explanation: Все три метрики легко различают простые случаи хороших и плохих алгоритмов. Обратим внимание, что метрики имеют область значений [0, 1], и потому их легко интерпретировать.
Метрикам не важны величины вероятностей, им важно только то, сколько объектов неправильно зашли за установленную границу (в данном случае T = 0.5).
Метрика accuracy дает одинаковый вес ошибкам false positive и false negative, зато пара метрик precision и recall однозначно идентифицирует это различие. Собственно, их для того и используют, чтобы контролировать ошибки FP и FN.
Мы измерили три метрики, фиксировав порог T = 0.5, потому что для почти всех картинок он кажется оптимальным. Давайте посмотрим на последней (самой интересной для этих метрик) группе векторов, как меняются precision и recall при увеличении порога.
End of explanation
############### Programming assignment: problem 1 ###############
def write_answer_1(precision_1, recall_1, precision_10, recall_10, precision_11, recall_11):
answers = [precision_1, recall_1, precision_10, recall_10, precision_11, recall_11]
with open("pa_metrics_problem1.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
Explanation: При увеличении порога мы делаем меньше ошибок FP и больше ошибок FN, поэтому одна из кривых растет, а вторая - падает. По такому графику можно подобрать оптимальное значение порога, при котором precision и recall будут приемлемы. Если такого порога не нашлось, нужно обучать другой алгоритм.
Оговоримся, что приемлемые значения precision и recall определяются предметной областью. Например, в задаче определения, болен ли пациент определенной болезнью (0 - здоров, 1 - болен), ошибок false negative стараются избегать, требуя recall около 0.9. Можно сказать человеку, что он болен, и при дальнейшей диагностике выявить ошибку; гораздо хуже пропустить наличие болезни.
<font color="green" size=5>Programming assignment: problem 1. </font> Фиксируем порог T = 0.65; по графикам можно примерно узнать, чему равны метрики на трех выбранных парах векторов (actual, predicted). Вычислите точные precision и recall для этих трех пар векторов.
6 полученных чисел запишите в текстовый файл в таком порядке:
precision_1 recall_1 precision_10 recall_10 precision_11 recall_11
Цифры XXX после пробела соответствуют таким же цифрам в названиях переменных actual_XXX и predicted_XXX.
Передайте ответ в функцию write_answer_1. Полученный файл загрузите в форму.
End of explanation
from sklearn.metrics import f1_score
T = 0.5
print("Разные склонности алгоритмов к ошибкам FP и FN:")
for actual, predicted, descr in zip([actual_1, actual_10, actual_11],
                                    [predicted_1 > T, predicted_10 > T, predicted_11 > T],
                                    ["Typical:", "Avoids FP:", "Avoids FN:"]):
    print(descr, "f1 =", f1_score(actual, predicted))
Explanation: F1-score
Очевидный недостаток пары метрик precision-recall - в том, что их две: непонятно, как ранжировать алгоритмы. Чтобы этого избежать, используют F1-метрику, которая равна среднему гармоническому precision и recall.
F1-метрика будет равна 1, если и только если precision = 1 и recall = 1 (идеальный алгоритм).
Обмануть F1 сложно: если одна из величин маленькая, а другая близка к 1 (по графикам видно, что такое соотношение иногда легко получить), F1 будет далека от 1. F1-метрику сложно оптимизировать, потому что для этого нужно добиваться высокой полноты и точности одновременно.
Например, посчитаем F1 для того же набора векторов, для которого мы строили графики (мы помним, что там одна из кривых быстро выходит в единицу).
End of explanation
############### Programming assignment: problem 2 ###############
# прежде чем строить график, задайте список найденных значений: ks = [k_1, k_10, k_11]
many_scatters([actual_1, actual_10, actual_11], [predicted_1, predicted_10, predicted_11],
              np.array(ks)*0.1, ["Typical", "Avoids FP", "Avoids FN"], (1, 3))
def write_answer_2(k_1, k_10, k_11):
answers = [k_1, k_10, k_11]
with open("pa_metrics_problem2.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
Explanation: F1-метрика в двух последних случаях, когда одна из парных метрик равна 1, значительно меньше, чем в первом, сбалансированном случае.
<font color="green" size=5>Programming assignment: problem 2. </font> На precision и recall влияют и характер вектора вероятностей, и установленный порог.
Для тех же пар (actual, predicted), что и в предыдущей задаче, найдите оптимальные пороги, максимизирующие F1_score. Будем рассматривать только пороги вида T = 0.1 * k, k - целое; соответственно, нужно найти три значения k. Если f1 максимизируется при нескольких значениях k, укажите наименьшее из них.
Запишите найденные числа k в следующем порядке:
k_1, k_10, k_11
Цифры XXX после пробела соответствуют таким же цифрам в названиях переменных actual_XXX и predicted_XXX.
Передайте ответ в функцию write_answer_2. Загрузите файл в форму.
Если вы запишите список из трех найденных k в том же порядке в переменную ks, то с помощью кода ниже можно визуализировать найденные пороги:
End of explanation
from sklearn.metrics import log_loss
print("Алгоритмы, разные по качеству:")
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
                                    [predicted_0, predicted_1, predicted_2],
                                    ["Perfect:", "Typical:", "Awful:"]):
    print(descr, log_loss(actual, predicted))
print()
print("Осторожный и рискующий алгоритмы:")
for actual, predicted, descr in zip([actual_0, actual_0r, actual_1, actual_1r],
                                    [predicted_0, predicted_0r, predicted_1, predicted_1r],
                                    ["Ideal careful:", "Ideal risky:", "Typical careful:", "Typical risky:"]):
    print(descr, log_loss(actual, predicted))
print()
print("Разные склонности алгоритмов к ошибкам FP и FN:")
for actual, predicted, descr in zip([actual_10, actual_11],
                                    [predicted_10, predicted_11],
                                    ["Avoids FP:", "Avoids FN:"]):
    print(descr, log_loss(actual, predicted))
Explanation: Метрики, оценивающие векторы вероятностей класса 1
Рассмотренные метрики удобно интерпретировать, но при их использовании мы не учитываем большую часть информации, полученной от алгоритма. В некоторых задачах вероятности нужны в чистом виде, например, если мы предсказываем, выиграет ли команда в футбольном матче, и величина вероятности влияет на размер ставки за эту команду. Даже если в конце концов мы все равно бинаризуем предсказание, хочется следить за характером вектора вероятности.
Log_loss
Log_loss вычисляет правдоподобие меток в actual с вероятностями из predicted, взятое с противоположным знаком:
$\text{log\_loss}(actual, predicted) = - \frac 1 n \sum_{i=1}^n \big(actual_i \cdot \log(predicted_i) + (1-actual_i) \cdot \log(1-predicted_i)\big)$, $n$ - длина векторов.
Соответственно, эту метрику нужно минимизировать.
Вычислим ее на наших векторах:
End of explanation
############### Programming assignment: problem 3 ###############
def write_answer_3(wll_0, wll_1, wll_2, wll_0r, wll_1r, wll_10, wll_11):
answers = [wll_0, wll_1, wll_2, wll_0r, wll_1r, wll_10, wll_11]
with open("pa_metrics_problem3.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
Explanation: Как и предыдущие метрики, log_loss хорошо различает идеальный, типичный и плохой случаи. Но обратите внимание, что интерпретировать величину достаточно сложно: метрика не достигает нуля никогда и не имеет верхней границы. Поэтому даже для идеального алгоритма, если смотреть только на одно значение log_loss, невозможно понять, что он идеальный.
Но зато эта метрика различает осторожный и рискующий алгоритмы. Как мы видели выше, в случаях Typical careful и Typical risky количество ошибок при бинаризации по T = 0.5 примерно одинаковое, в случаях Ideal ошибок вообще нет. Однако за неудачно угаданные классы в Typical рискующему алгоритму приходится платить большим увеличением log_loss, чем осторожному алгоритму. С другой стороны, за удачно угаданные классы рискованный идеальный алгоритм получает меньший log_loss, чем осторожный идеальный алгоритм.
Таким образом, log_loss чувствителен и к вероятностям, близким к 0 и 1, и к вероятностям, близким к 0.5.
Ошибки FP и FN обычный Log_loss различать не умеет.
Однако нетрудно сделать обобщение log_loss на случай, когда нужно больше штрафовать FP или FN: для этого достаточно добавить выпуклую (то есть неотрицательную и суммирующуюся к единице) комбинацию из двух коэффициентов к слагаемым правдоподобия. Например, давайте штрафовать false positive:
$\text{weighted\_log\_loss}(actual, predicted) = -\frac 1 n \sum_{i=1}^n \big(0.3 \cdot actual_i \cdot \log(predicted_i) + 0.7 \cdot (1-actual_i) \cdot \log(1-predicted_i)\big)$
Если алгоритм неверно предсказывает большую вероятность первому классу, то есть объект на самом деле принадлежит классу 0, то первое слагаемое в скобках равно нулю, а второе учитывается с большим весом.
<font color="green" size=5>Programming assignment: problem 3. </font> Напишите функцию, которая берет на вход векторы actual и predicted и возвращает модифицированный Log-Loss, вычисленный по формуле выше. Вычислите ее значение (обозначим его wll) на тех же векторах, на которых мы вычисляли обычный log_loss, и запишите в файл в следующем порядке:
wll_0 wll_1 wll_2 wll_0r wll_1r wll_10 wll_11
Цифры XXX после пробела соответствуют таким же цифрам в названиях переменных actual_XXX и predicted_XXX.
Передайте ответ в функцию write_answer_3. Загрузите файл в форму.
End of explanation
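Один возможный набросок реализации формулы с весами 0.3 и 0.7 (это иллюстрация, а не эталонное решение задания; имя функции и параметры w1, w0, eps — наши допущения):

```python
import numpy as np

def weighted_log_loss(actual, predicted, w1=0.3, w0=0.7, eps=1e-15):
    actual = np.asarray(actual, dtype=float)
    p = np.clip(np.asarray(predicted, dtype=float), eps, 1 - eps)
    # w1 взвешивает слагаемое класса 1, w0 — слагаемое класса 0 (штраф за FP)
    return -np.mean(w1 * actual * np.log(p) + w0 * (1 - actual) * np.log(1 - p))
```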
from sklearn.metrics import roc_curve, roc_auc_score
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
aucs = ""
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
[predicted_0, predicted_1, predicted_2],
["Perfect", "Typical", "Awful"]):
fpr, tpr, thr = roc_curve(actual, predicted)
plt.plot(fpr, tpr, label=descr)
    aucs += descr + ":%.3f" % roc_auc_score(actual, predicted) + " "
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplot(1, 3, 2)
for actual, predicted, descr in zip([actual_0, actual_0r, actual_1, actual_1r],
[predicted_0, predicted_0r, predicted_1, predicted_1r],
["Ideal careful", "Ideal Risky", "Typical careful", "Typical risky"]):
fpr, tpr, thr = roc_curve(actual, predicted)
    aucs += descr + ":%.3f" % roc_auc_score(actual, predicted) + " "
plt.plot(fpr, tpr, label=descr)
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplot(1, 3, 3)
for actual, predicted, descr in zip([actual_1, actual_10, actual_11],
[predicted_1, predicted_10, predicted_11],
["Typical", "Avoids FP", "Avoids FN"]):
fpr, tpr, thr = roc_curve(actual, predicted)
    aucs += descr + ":%.3f" % roc_auc_score(actual, predicted) + " "
plt.plot(fpr, tpr, label=descr)
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
print(aucs)
Explanation: Обратите внимание на разницу weighted_log_loss между случаями Avoids FP и Avoids FN.
ROC и AUC
При построении ROC-кривой (receiver operating characteristic) происходит варьирование порога бинаризации вектора вероятностей, и вычисляются величины, зависящие от числа ошибок FP и FN. Эти величины задаются так, чтобы в случае, когда существует порог для идеального разделения классов, ROC-кривая проходила через определенную точку - верхний левый угол квадрата [0, 1] x [0, 1]. Кроме того, она всегда проходит через левый нижний и правый верхний углы. Получается наглядная визуализация качества алгоритма. С целью охарактеризовать эту визуализацию численно, ввели понятие AUC - площадь под ROC-кривой.
Есть несложный и эффективный алгоритм, который за один проход по выборке вычисляет ROC-кривую и AUC, но мы не будем вдаваться в детали.
Построим ROC-кривые для наших задач:
End of explanation
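Саму площадь под ломаной ROC-кривой можно получить численным интегрированием по правилу трапеций; набросок на игрушечных точках (они — наше допущение):

```python
import numpy as np

# точки ROC-кривой: fpr по оси x, tpr по оси y
fpr = np.array([0.0, 0.0, 0.5, 1.0])
tpr = np.array([0.0, 0.5, 0.5, 1.0])

# правило трапеций: сумма (ширина отрезка) * (среднее соседних tpr)
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # 0.625
```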
############### Programming assignment: problem 4 ###############
def write_answer_4(T_0, T_1, T_2, T_0r, T_1r, T_10, T_11):
answers = [T_0, T_1, T_2, T_0r, T_1r, T_10, T_11]
with open("pa_metrics_problem4.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
Explanation: Чем больше объектов в выборке, тем более гладкой выглядит кривая (хотя на самом деле она все равно ступенчатая).
Как и ожидалось, кривые всех идеальных алгоритмов проходят через левый верхний угол. На первом графике также показана типичная ROC-кривая (обычно на практике они не доходят до "идеального" угла).
AUC рискующего алгоритма значительно меньше, чем у осторожного, хотя осторожный и рискующий идеальные алгоритмы не различаются по ROC или AUC. Поэтому стремиться увеличить зазор между интервалами вероятностей классов смысла не имеет.
Наблюдается перекос кривой в случае, когда алгоритму свойственны ошибки FP или FN. Однако по величине AUC это отследить невозможно (кривые могут быть симметричны относительно диагонали (0, 1)-(1, 0)).
После того, как кривая построена, удобно выбирать порог бинаризации, в котором будет достигнут компромисс между FP или FN. Порог соответствует точке на кривой. Если мы хотим избежать ошибок FP, нужно выбирать точку на левой стороне квадрата (как можно выше), если FN - точку на верхней стороне квадрата (как можно левее). Все промежуточные точки будут соответствовать разным пропорциям FP и FN.
<font color="green" size=5>Programming assignment: problem 4. </font> На каждой кривой найдите точку, которая ближе всего к левому верхнему углу (ближе в смысле обычного евклидова расстояния), этой точке соответствует некоторый порог бинаризации. Запишите в выходной файл пороги в следующем порядке:
T_0 T_1 T_2 T_0r T_1r T_10 T_11
Цифры XXX после пробела соответствуют таким же цифрам в названиях переменных actual_XXX и predicted_XXX.
Если порогов, минимизирующих расстояние, несколько, выберите наибольший.
Передайте ответ в функцию write_answer_4. Загрузите файл в форму.
Пояснение: функция roc_curve возвращает три значения: FPR (массив абсцисс точек ROC-кривой), TPR (массив ординат точек ROC-кривой) и thresholds (массив порогов, соответствующих точкам).
Рекомендуем отрисовывать найденную точку на графике с помощью функции plt.scatter.
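Набросок поиска такой точки (иллюстрация, а не эталонное решение задания; удобно, что thresholds у roc_curve отсортированы по убыванию, поэтому np.argmin сам вернёт наибольший из порогов с минимальным расстоянием):

```python
import numpy as np

def best_threshold(fpr, tpr, thresholds):
    # евклидово расстояние от каждой точки кривой до угла (0, 1)
    dist = np.sqrt(fpr ** 2 + (1 - tpr) ** 2)
    return thresholds[np.argmin(dist)]

# игрушечная трёхточечная «кривая»
best_threshold(np.array([0.0, 0.5, 1.0]),
               np.array([0.0, 1.0, 1.0]),
               np.array([1.9, 0.5, 0.1]))  # 0.5
```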
End of explanation |
8,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 2
Imports
Step1: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX
Step2: Then use interact to create a user interface for exploring your function
Step3: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument
Step4: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed  # IPython.html.widgets was removed in IPython 4+
from IPython.display import display
Explanation: Interact Exercise 2
Imports
End of explanation
plt.xticks?
def plot_sin1(a,b):
x=np.linspace(0,4*np.pi,300)
plt.figure(figsize=(12,5))
plt.plot(x,np.sin((a*x)+b),'g-')
plt.xlim(0,4*np.pi)
plt.tick_params(direction='out')
    plt.xticks([np.pi, 2*np.pi, 3*np.pi, 4*np.pi], [r'$\pi$', r'$2\pi$', r'$3\pi$', r'$4\pi$'])
plot_sin1(5., 3.4)
Explanation: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
End of explanation
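One way to build the π-multiple tick labels is with raw LaTeX strings; a small sketch (the list comprehension below is our assumption, not part of the original solution):

```python
import numpy as np

ticks = [k * np.pi for k in range(1, 5)]
labels = [r'$\pi$'] + [r'$%d\pi$' % k for k in range(2, 5)]
# labels == ['$\pi$', '$2\pi$', '$3\pi$', '$4\pi$']
```

Both lists can be passed straight to plt.xticks(ticks, labels).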
interact(plot_sin1,a=(0,5.0),b=(-5.0,5.0))
assert True # leave this for grading the plot_sine1 exercise
Explanation: Then use interact to create a user interface for exploring your function:
a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
End of explanation
def plot_sine2(a,b,style='b-'):
x=np.linspace(0,4*np.pi,300)
plt.figure(figsize=(12,5))
plt.plot(x,np.sin((a*x)+b),style)
plt.xlim(0,4*np.pi)
plt.tick_params(direction='out')
    plt.xticks([0, np.pi, 2*np.pi, 3*np.pi, 4*np.pi], [r'$0$', r'$\pi$', r'$2\pi$', r'$3\pi$', r'$4\pi$'])
plot_sine2(4.0, -1.0, 'r--')
Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:
dashed red: r--
blue circles: bo
dotted black: k.
Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.
End of explanation
interact(plot_sine2,a=(0,5.0),b=(-5.0,5.0),style=('b.','ko','r^'))
assert True # leave this for grading the plot_sine2 exercise
Explanation: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.
End of explanation |
8,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SQL Alchemy Core Examples
This file contains SQLAlchemy core examples.
Test we have SQL Alchemy
Step1: Fetch an SQLite engine and create an in memory database
Step2: Now lets make a couple of tables and do some queries
Step3: ... and let's add a couple of users. First we make a statement for the insert, and then we would execute it.
Step4: ... and then actually put them into the database
Step5: ... and pull the data back out.
Step6: So the results are an iterator which gives up tuples.
However, we can get a list of dictionaries from the iterator
Step7: Lets add some item data
Step8: Now lets now join the two tables together
Step9: That's everything in both tables. But we have id_user in the second table.
Let's do a very simple filter
Step10: Functions?
Step11: That's great, but we have a convenience function as well
Step12: Group by
Step13: A final 'fun' query | Python Code:
import sqlalchemy
sqlalchemy.__version__
Explanation: SQL Alchemy Core Examples
This file contains SQLAlchemy core examples.
Test we have SQL Alchemy
End of explanation
from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:')
Explanation: Fetch an SQLite engine and create an in memory database
End of explanation
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
metadata = MetaData()
user = Table('user', metadata,
Column('id_user', Integer, primary_key=True),
Column('name', String),
Column('age', Integer))
item = Table('item', metadata,
Column('id_item', Integer, primary_key=True),
Column('id_user', Integer, ForeignKey('user.id_user')),
Column('thing', String))
metadata.create_all(engine)
Explanation: Now let's make a couple of tables and do some queries
End of explanation
people = [
(1, 'Bob', '20'),
(2, 'Sally', '25'),
(3, 'John', '30')]
insert = user.insert()
print(insert)
for p in people:
stmt = insert.values(p)
print(stmt.compile().params)
Explanation: ... and let's add a couple of users. First we make a statement for the insert, and then we would execute it.
End of explanation
connection = engine.connect()
for p in people:
connection.execute(insert.values(p))
Explanation: ... and then actually put them into the database:
End of explanation
from sqlalchemy import select
users = connection.execute(select([user]))
print(users)
print(list(users))
Explanation: ... and pull the data back out.
End of explanation
users = connection.execute(select([user]))
for u in users:
i = u.items()
# print(i)
print(dict(i))
Explanation: So the results are an iterator which gives up tuples.
However, we can get a list of dictionaries from the iterator:
End of explanation
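Independent of SQLAlchemy, the tuple-to-dictionary idea can be sketched in plain Python, assuming the column names are known up front (the column names and rows below are illustrative, not the database's actual contents):

```python
# Pair each positional row tuple with known column names to get dicts.
columns = ("id_user", "name", "age")   # hypothetical column order
rows = [(1, "Bob", 20), (2, "Sally", 25)]

def rows_to_dicts(columns, rows):
    """Zip every row tuple with the column names."""
    return [dict(zip(columns, row)) for row in rows]

records = rows_to_dicts(columns, rows)
```

This is essentially what `dict(u.items())` does for each result row above.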
items = (
(1, 1, 'Peanuts'),
(2, 1, 'VW'),
(3, 1, 'iPad'),
(4, 2, 'Raisins'),
(5, 2, 'Fiat'),
(6, 2, 'Nexus 10'),
(7, 2, 'Timex'),
(8, 3, 'Caviar'),
(9, 3, 'Porche'),
(10, 3, 'Surface Pro'),
(11, 3, 'Rolex'),
(12, 3, 'Boat'),
(13, 3, 'Plane'))
insert = item.insert()
for i in items:
connection.execute(insert.values(i))
for x in connection.execute(select([item])):
print(x)
Explanation: Let's add some item data
End of explanation
stmt = select([user, item], use_labels=True)
for s in connection.execute(stmt):
print(s)
# print(s.items())
Explanation: Now let's join the two tables together:
End of explanation
stmt = select([user, item]).where(user.c.id_user == item.c.id_user)
print(stmt)
for s in connection.execute(stmt):
print(s)
Explanation: That's everything in both tables. But we have id_user in the second table.
Let's do a very simple filter:
End of explanation
from sqlalchemy import func
stmt = select([func.count(user.c.id_user)])
result = connection.execute(stmt)
print(tuple(result))
Explanation: Functions?
End of explanation
result = connection.execute(stmt).scalar()
print(result)
Explanation: That's great, but we have a convenience function as well:
End of explanation
stmt = (select([user, func.count(user.c.id_user).label('item_count')])
.select_from(user.join(item))
.group_by(user.c.id_user))
print(stmt)
for s in connection.execute(stmt):
print(s.items())
Explanation: Group by
End of explanation
stmt1 = select([item.c.id_user]).where(item.c.thing.ilike('boat'))
print(connection.execute(stmt1).fetchone())
stmt2 = select([user]).where(user.c.id_user.in_(stmt1.alias()))
print(stmt2)
print(connection.execute(stmt2).fetchone())
Explanation: A final 'fun' query
End of explanation |
8,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: How to match images using DELF and TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: The data
In the next cell, we specify the URLs of two images we would like to process with DELF in order to match and compare them.
Step3: Download, resize, save and display the images.
Step4: Apply the DELF module to the data
The DELF module takes an image as input and describes noteworthy points with vectors. The following cell contains the core of this Colab's logic.
Step5: Use the locations and description vectors to match the images | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install scikit-image
from absl import logging
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image, ImageOps
from scipy.spatial import cKDTree
from skimage.feature import plot_matches
from skimage.measure import ransac
from skimage.transform import AffineTransform
from six import BytesIO
import tensorflow as tf
import tensorflow_hub as hub
from six.moves.urllib.request import urlopen
Explanation: How to match images using DELF and TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf_hub_delf_module"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf_hub_delf_module.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf_hub_delf_module.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/tf_hub_delf_module.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
<td><a href="https://tfhub.dev/google/delf/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See TF Hub model</a></td>
</table>
TensorFlow Hub (TF-Hub) is a platform for sharing machine learning expertise in reusable resources, notably pre-trained modules.
In this Colab, we use a module that packages the DELF neural network together with the logic for processing images to identify keypoints and their descriptors. The weights of the neural network were trained on images of landmarks, as described in this paper.
Setup
End of explanation
#@title Choose images
images = "Bridge of Sighs" #@param ["Bridge of Sighs", "Golden Gate", "Acropolis", "Eiffel tower"]
if images == "Bridge of Sighs":
# from: https://commons.wikimedia.org/wiki/File:Bridge_of_Sighs,_Oxford.jpg
# by: N.H. Fischer
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/2/28/Bridge_of_Sighs%2C_Oxford.jpg'
# from https://commons.wikimedia.org/wiki/File:The_Bridge_of_Sighs_and_Sheldonian_Theatre,_Oxford.jpg
# by: Matthew Hoser
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/c3/The_Bridge_of_Sighs_and_Sheldonian_Theatre%2C_Oxford.jpg'
elif images == "Golden Gate":
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/1/1e/Golden_gate2.jpg'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/3/3e/GoldenGateBridge.jpg'
elif images == "Acropolis":
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/ce/2006_01_21_Ath%C3%A8nes_Parth%C3%A9non.JPG'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/5/5c/ACROPOLIS_1969_-_panoramio_-_jean_melis.jpg'
else:
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/d/d8/Eiffel_Tower%2C_November_15%2C_2011.jpg'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/a/a8/Eiffel_Tower_from_immediately_beside_it%2C_Paris_May_2008.jpg'
Explanation: The data
In the next cell, we specify the URLs of two images we would like to process with DELF in order to match and compare them.
End of explanation
def download_and_resize(name, url, new_width=256, new_height=256):
path = tf.keras.utils.get_file(url.split('/')[-1], url)
image = Image.open(path)
image = ImageOps.fit(image, (new_width, new_height), Image.ANTIALIAS)
return image
image1 = download_and_resize('image_1.jpg', IMAGE_1_URL)
image2 = download_and_resize('image_2.jpg', IMAGE_2_URL)
plt.subplot(1,2,1)
plt.imshow(image1)
plt.subplot(1,2,2)
plt.imshow(image2)
Explanation: Download, resize, save and display the images.
End of explanation
delf = hub.load('https://tfhub.dev/google/delf/1').signatures['default']
def run_delf(image):
np_image = np.array(image)
float_image = tf.image.convert_image_dtype(np_image, tf.float32)
return delf(
image=float_image,
score_threshold=tf.constant(100.0),
image_scales=tf.constant([0.25, 0.3536, 0.5, 0.7071, 1.0, 1.4142, 2.0]),
max_feature_num=tf.constant(1000))
result1 = run_delf(image1)
result2 = run_delf(image2)
Explanation: Apply the DELF module to the data
The DELF module takes an image as input and describes noteworthy points with vectors. The following cell contains the core of this Colab's logic.
End of explanation
#@title TensorFlow is not needed for this post-processing and visualization
def match_images(image1, image2, result1, result2):
distance_threshold = 0.8
# Read features.
num_features_1 = result1['locations'].shape[0]
print("Loaded image 1's %d features" % num_features_1)
num_features_2 = result2['locations'].shape[0]
print("Loaded image 2's %d features" % num_features_2)
# Find nearest-neighbor matches using a KD tree.
d1_tree = cKDTree(result1['descriptors'])
_, indices = d1_tree.query(
result2['descriptors'],
distance_upper_bound=distance_threshold)
# Select feature locations for putative matches.
locations_2_to_use = np.array([
result2['locations'][i,]
for i in range(num_features_2)
if indices[i] != num_features_1
])
locations_1_to_use = np.array([
result1['locations'][indices[i],]
for i in range(num_features_2)
if indices[i] != num_features_1
])
# Perform geometric verification using RANSAC.
_, inliers = ransac(
(locations_1_to_use, locations_2_to_use),
AffineTransform,
min_samples=3,
residual_threshold=20,
max_trials=1000)
print('Found %d inliers' % sum(inliers))
# Visualize correspondences.
_, ax = plt.subplots()
inlier_idxs = np.nonzero(inliers)[0]
plot_matches(
ax,
image1,
image2,
locations_1_to_use,
locations_2_to_use,
np.column_stack((inlier_idxs, inlier_idxs)),
matches_color='b')
ax.axis('off')
ax.set_title('DELF correspondences')
match_images(image1, image2, result1, result2)
Explanation: Use the locations and description vectors to match the images
End of explanation |
8,415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Get started with TensorBoard
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: This example uses the MNIST dataset. Next, write a function that normalizes the data, and create a simple Keras model that classifies the images into 10 classes.
Step3: Using TensorBoard with Keras Model.fit()
When training with Keras's Model.fit(), adding the tf.keras.callbacks.TensorBoard callback ensures that logs are created and stored. In addition, enable histogram computation every epoch with histogram_freq=1 (it is off by default).
Place the logs in a timestamped subdirectory to allow easy selection of different training runs.
Step4: Start TensorBoard through the command line or within a notebook experience; the two interfaces are generally the same. In notebooks, use the %tensorboard line magic. On the command line, run the same command without "%".
Step5: <img class="tfo-display-only-on-site" src="https
Step6: 训练代码遵循 advanced quickstart 教程,但显示了如何将 log 记录到 TensorBoard 。 首先选择损失和优化器:
Step7: 创建可用于在训练期间累积值并在任何时候记录的有状态指标:
Step8: 定义训练和测试代码:
Step9: 设置摘要编写器,以将摘要写到另一个日志目录中的磁盘上:
Step10: 再次打开 TensorBoard,这次将其指向新的日志目录。 我们也可以启动 TensorBoard 来监视训练进度。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# Load the TensorBoard notebook extension
%load_ext tensorboard
import tensorflow as tf
import datetime
# Clear any logs from previous runs
!rm -rf ./logs/
Explanation: Get started with TensorBoard
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tensorboard/get_started"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/get_started.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/get_started.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tensorboard/get_started.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In machine learning, to improve some parameter of a model you usually need to measure it. TensorBoard is a tool for providing the measurements and visualizations needed during the machine learning workflow. It enables tracking experiment metrics such as loss and accuracy, visualizing the model graph, projecting embeddings to a lower-dimensional space, and much more.
This quickstart shows how to get started with TensorBoard quickly. The remaining guides on this site provide more details on specific capabilities, many of which are not covered here.
End of explanation
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
Explanation: This example uses the MNIST dataset. Next, write a function that normalizes the data, and create a simple Keras model that classifies the images into 10 classes.
End of explanation
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
Explanation: Using TensorBoard with Keras Model.fit()
When training with Keras's Model.fit(), adding the tf.keras.callbacks.TensorBoard callback ensures that logs are created and stored. In addition, enable histogram computation every epoch with histogram_freq=1 (it is off by default).
Place the logs in a timestamped subdirectory to allow easy selection of different training runs.
End of explanation
%tensorboard --logdir logs/fit
Explanation: Start TensorBoard through the command line or within a notebook experience; the two interfaces are generally the same. In notebooks, use the %tensorboard line magic. On the command line, run the same command without "%".
End of explanation
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
train_dataset = train_dataset.shuffle(60000).batch(64)
test_dataset = test_dataset.batch(64)
Explanation: <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/quickstart_model_fit.png?raw=1"/>
A brief overview of the dashboards shown (tabs in the top navigation bar):
Scalars shows how the loss and metrics change with every epoch. You can also use it to track training speed, learning rate, and other scalar values.
Graphs helps you visualize your model. In this case, the Keras graph of layers is shown, which can help you ensure that the model is built correctly.
Distributions and Histograms show the distribution of a tensor over time. This is useful for visualizing weights and biases and verifying that they are changing in the expected way.
当您记录其他类型的数据时,会自动启用其他 TensorBoard 插件。 例如,使用 Keras TensorBoard 回调还可以记录图像和嵌入。 您可以通过单击右上角的“inactive”下拉列表来查看 TensorBoard 中还有哪些其他插件。
通过其他方法使用 TensorBoard
用以下方法训练时,例如 tf.GradientTape(), 会使用 tf.summary 记录所需的信息。
使用与上述相同的数据集,但将其转换为 tf.data.Dataset 以利用批处理功能:
End of explanation
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
Explanation: 训练代码遵循 advanced quickstart 教程,但显示了如何将 log 记录到 TensorBoard 。 首先选择损失和优化器:
End of explanation
# Define our metrics
train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy')
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy')
Explanation: Create stateful metrics that can be used to accumulate values during training and logged at any point:
End of explanation
def train_step(model, optimizer, x_train, y_train):
with tf.GradientTape() as tape:
predictions = model(x_train, training=True)
loss = loss_object(y_train, predictions)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_loss(loss)
train_accuracy(y_train, predictions)
def test_step(model, x_test, y_test):
predictions = model(x_test)
loss = loss_object(y_test, predictions)
test_loss(loss)
test_accuracy(y_test, predictions)
Explanation: 定义训练和测试代码:
End of explanation
current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_log_dir = 'logs/gradient_tape/' + current_time + '/train'
test_log_dir = 'logs/gradient_tape/' + current_time + '/test'
train_summary_writer = tf.summary.create_file_writer(train_log_dir)
test_summary_writer = tf.summary.create_file_writer(test_log_dir)
Start training. Within the scope of the summary writers, use `tf.summary.scalar()` during training/testing to log the metrics (loss and accuracy) and write the summaries to disk. You can control which metrics to log and how often. Other `tf.summary` functions enable logging other types of data.
model = create_model() # reset our model
EPOCHS = 5
for epoch in range(EPOCHS):
for (x_train, y_train) in train_dataset:
train_step(model, optimizer, x_train, y_train)
with train_summary_writer.as_default():
tf.summary.scalar('loss', train_loss.result(), step=epoch)
tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch)
for (x_test, y_test) in test_dataset:
test_step(model, x_test, y_test)
with test_summary_writer.as_default():
tf.summary.scalar('loss', test_loss.result(), step=epoch)
tf.summary.scalar('accuracy', test_accuracy.result(), step=epoch)
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print (template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100))
# Reset metrics every epoch
train_loss.reset_states()
test_loss.reset_states()
train_accuracy.reset_states()
test_accuracy.reset_states()
Explanation: Set up summary writers to write the summaries to disk in a different logs directory:
End of explanation
%tensorboard --logdir logs/gradient_tape
Explanation: Open TensorBoard again, this time pointing it at the new log directory. We could have also started TensorBoard to monitor training while it progresses.
End of explanation |
8,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 3 – Classification
This notebook contains all the sample code and solutions to the exercises in chapter 3.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures
Step1: MNIST
Step2: Binary classifier
Step3: Note
Step4: ROC curves
Step6: Multiclass classification
Step7: Multilabel classification
Step8: Warning
Step9: Multioutput classification
Step10: Extra material
Dummy (ie. random) classifier
Step11: KNN classifier | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "classification"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
Explanation: Chapter 3 – Classification
This notebook contains all the sample code and solutions to the exercises in chapter 3.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
mnist
X, y = mnist["data"], mnist["target"]
X.shape
y.shape
28*28
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
some_digit = X[36000]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap = matplotlib.cm.binary,
interpolation="nearest")
plt.axis("off")
save_fig("some_digit_plot")
plt.show()
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = matplotlib.cm.binary,
interpolation="nearest")
plt.axis("off")
# EXTRA
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = matplotlib.cm.binary, **options)
plt.axis("off")
plt.figure(figsize=(9,9))
example_images = np.r_[X[:12000:600], X[13000:30600:600], X[30600:60000:590]]
plot_digits(example_images, images_per_row=10)
save_fig("more_digits_plot")
plt.show()
y[36000]
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
import numpy as np
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
Explanation: MNIST
End of explanation
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = (y_train_5[train_index])
X_test_fold = X_train[test_index]
y_test_fold = (y_train_5[test_index])
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
y_train_perfect_predictions = y_train_5
confusion_matrix(y_train_5, y_train_perfect_predictions)
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred)
4344 / (4344 + 1307)
recall_score(y_train_5, y_train_pred)
4344 / (4344 + 1077)
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
4344 / (4344 + (1077 + 1307)/2)
y_scores = sgd_clf.decision_function([some_digit])
y_scores
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
threshold = 200000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,
method="decision_function")
Explanation: Binary classifier
End of explanation
y_scores.shape
# hack to work around issue #9589 introduced in Scikit-Learn 0.19.0
if y_scores.ndim == 2:
y_scores = y_scores[:, 1]
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.xlabel("Threshold", fontsize=16)
plt.legend(loc="upper left", fontsize=16)
plt.ylim([0, 1])
plt.figure(figsize=(8, 4))
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.xlim([-700000, 700000])
save_fig("precision_recall_vs_threshold_plot")
plt.show()
(y_train_pred == (y_scores > 0)).all()
y_train_pred_90 = (y_scores > 70000)
precision_score(y_train_5, y_train_pred_90)
recall_score(y_train_5, y_train_pred_90)
def plot_precision_vs_recall(precisions, recalls):
plt.plot(recalls, precisions, "b-", linewidth=2)
plt.xlabel("Recall", fontsize=16)
plt.ylabel("Precision", fontsize=16)
plt.axis([0, 1, 0, 1])
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
save_fig("precision_vs_recall_plot")
plt.show()
Explanation: Note: there is an issue introduced in Scikit-Learn 0.19.0 where the result of cross_val_predict() is incorrect in the binary classification case when using method="decision_function", as in the code above. The resulting array has an extra first dimension full of 0s. We need to add this small hack for now to work around this issue:
End of explanation
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr, tpr)
save_fig("roc_curve_plot")
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method="predict_proba")
y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest)
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.legend(loc="lower right", fontsize=16)
save_fig("roc_curve_comparison_plot")
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
precision_score(y_train_5, y_train_pred_forest)
recall_score(y_train_5, y_train_pred_forest)
Explanation: ROC curves
End of explanation
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
some_digit_scores = sgd_clf.decision_function([some_digit])
some_digit_scores
np.argmax(some_digit_scores)
sgd_clf.classes_
sgd_clf.classes_[5]
from sklearn.multiclass import OneVsOneClassifier
ovo_clf = OneVsOneClassifier(SGDClassifier(random_state=42))
ovo_clf.fit(X_train, y_train)
ovo_clf.predict([some_digit])
len(ovo_clf.estimators_)
forest_clf.fit(X_train, y_train)
forest_clf.predict([some_digit])
forest_clf.predict_proba([some_digit])
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy")
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
def plot_confusion_matrix(matrix):
    """If you prefer color and a colorbar"""
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
cax = ax.matshow(matrix)
fig.colorbar(cax)
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_errors_plot", tight_layout=False)
plt.show()
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
save_fig("error_analysis_digits_plot")
plt.show()
Explanation: Multiclass classification
End of explanation
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
Explanation: Multilabel classification
End of explanation
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3)
f1_score(y_multilabel, y_train_knn_pred, average="macro")
Explanation: Warning: the following cell may take a very long time (possibly hours depending on your hardware).
End of explanation
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
some_index = 5500
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
save_fig("noisy_digit_example_plot")
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
save_fig("cleaned_digit_example_plot")
Explanation: Multioutput classification
End of explanation
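The same noisy-input/clean-target setup can be mimicked end to end with a toy 1-nearest-neighbour multioutput predictor; the data here are hypothetical 8-pixel "images", not MNIST digits:

```python
import numpy as np

def predict_multioutput_1nn(X_train, Y_train, x):
    # Copy the full target vector of the closest training input.
    dists = np.linalg.norm(X_train - x, axis=1)
    return Y_train[np.argmin(dists)]

rng = np.random.default_rng(0)
clean = rng.integers(0, 2, size=(20, 8)).astype(float)  # targets: clean vectors
noisy = clean + rng.normal(0.0, 0.1, size=clean.shape)  # inputs: noisy copies

denoised = predict_multioutput_1nn(noisy, clean, noisy[3])
```

The output of the predictor is a whole vector per input, which is exactly what "multioutput" means here.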
from sklearn.dummy import DummyClassifier
dmy_clf = DummyClassifier()
y_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_scores_dmy = y_probas_dmy[:, 1]
fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)
plot_roc_curve(fprr, tprr)
Explanation: Extra material
Dummy (i.e. random) classifier
End of explanation
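The expectation that such a random classifier hugs the ROC diagonal (AUC close to 0.5) can be checked without scikit-learn via the Mann-Whitney formulation of AUC; this is a hedged toy computation on synthetic labels and scores, not the DummyClassifier API:

```python
import numpy as np

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=5000)
y_scores = rng.random(5000)  # scores carry no information about y_true

# AUC = P(score of a random positive > score of a random negative).
pos = y_scores[y_true == 1]
neg = y_scores[y_true == 0]
auc = (pos[:, None] > neg[None, :]).mean()
```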
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_jobs=-1, weights='distance', n_neighbors=4)
knn_clf.fit(X_train, y_train)
y_knn_pred = knn_clf.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_knn_pred)
from scipy.ndimage import shift  # scipy.ndimage.interpolation was removed in newer SciPy
def shift_digit(digit_array, dx, dy, new=0):
return shift(digit_array.reshape(28, 28), [dy, dx], cval=new).reshape(784)
plot_digit(shift_digit(some_digit, 5, 1, new=100))
X_train_expanded = [X_train]
y_train_expanded = [y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
shifted_images = np.apply_along_axis(shift_digit, axis=1, arr=X_train, dx=dx, dy=dy)
X_train_expanded.append(shifted_images)
y_train_expanded.append(y_train)
X_train_expanded = np.concatenate(X_train_expanded)
y_train_expanded = np.concatenate(y_train_expanded)
X_train_expanded.shape, y_train_expanded.shape
knn_clf.fit(X_train_expanded, y_train_expanded)
y_knn_expanded_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_knn_expanded_pred)
ambiguous_digit = X_test[2589]
knn_clf.predict_proba([ambiguous_digit])
plot_digit(ambiguous_digit)
Explanation: KNN classifier
End of explanation |
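A dependency-free alternative to scipy's shift for the data augmentation above is np.roll, which wraps pixels around the edge; that is only harmless here because the MNIST border is blank. A toy 4x4 example (assumed shape, not 28x28):

```python
import numpy as np

def shift_image(flat, dx, dy, side=4):
    # Shift a flattened side x side image right by dx and down by dy (wrap-around).
    img = flat.reshape(side, side)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1).reshape(-1)

img = np.zeros(16)
img[5] = 1.0                      # lone pixel at row 1, column 1
shifted = shift_image(img, 1, 1)  # now at row 2, column 2
```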
8,417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example on the use of correspondence tables
In this simple example it is shown how a vector classified according to one classification is converted into another classification
The first classification has four categories
Step1: Let's create an arbitrary classification matrix (CM)
Step2: Notice that moving from the first classification to the second one is possible since the 'totals' of rows are all equal to 1 (see below the other way around)
Step3: Let's create an arbitrary vector classified according to the first classification
Step4: This vector is converted into the second classification
Step5: Moving from the second classification back to the first one may cause problems, since the "totals" of the columns are not always 1.
import numpy as np
import pandas as pd
Explanation: Example on the use of correspondence tables
In this simple example it is shown how a vector classified according to one classification is converted into another classification
The first classification has four categories: A, B, C, D
The second classification has three categories: 1, 2, 3
End of explanation
# pd.DataFrame.from_items was removed in pandas 1.0; build the frame directly instead
CM = pd.DataFrame([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]],
                  index=['A', 'B', 'C', 'D'], columns=['1', '2', '3'])
display (CM)
Explanation: Let's create an arbitrary classification matrix (CM)
End of explanation
CM_tot2 = CM.sum(axis=1)
C2 = CM.copy()  # copy so the 'total' column is not added to CM itself
C2['total'] = CM_tot2
display (C2)
Explanation: Notice that moving from the first classification to the second one is possible, since the 'totals' of the rows are all equal to 1 (see below for the other way around)
End of explanation
V1 = np.random.randint(0, 10, size=4).reshape(4, 1)
Class_A = [_ for _ in 'ABCD']
V1_A = pd.DataFrame(V1, index=Class_A, columns = ['amount'])
display (V1_A)
Explanation: Let's create an arbitrary vector classified according to the first classification
End of explanation
V1_A_transp=pd.DataFrame.transpose(V1_A)
V1_B= pd.DataFrame((np.dot(V1_A_transp, CM)), index=['amount'], columns = ['1','2','3'])
display (V1_B)
Explanation: This vector is converted into the second classification
End of explanation
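Stripped of pandas, the conversion above is a plain vector-matrix product, and the rows-sum-to-one property guarantees the grand total is preserved:

```python
import numpy as np

CM = np.array([[1, 0, 0],   # A -> 1
               [0, 1, 0],   # B -> 2
               [0, 0, 1],   # C -> 3
               [0, 0, 1]])  # D -> 3

v1 = np.array([4, 7, 2, 5])  # amounts per category A, B, C, D
v2 = v1 @ CM                 # amounts per category 1, 2, 3
```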
sum_row = {col: CM[col].sum() for col in CM}
#sum_row =CM.sum()
sum_rCM = pd.DataFrame(sum_row, index=["Total"])
CM_tot = pd.concat([CM, sum_rCM])  # DataFrame.append was removed in pandas 2.0
display (CM_tot)
Explanation: Moving from the second classification back to the first one may cause problems, since the "totals" of the columns are not always 1.
End of explanation |
8,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's reproject to an Albers equal-area projection (or another projection with metric distances)
Step1: Uncomment to reproject
proj string taken from
Step2: The area is very big -> 35000 points.
We need to make a subset of this
Step3: Residuals
Step4: Residuals of $ Biomass ~ SppRich + Z(x,y) + \epsilon $
Using all data
Model Analysis with the empirical variogram
Step5: Fitting the empirical variogram to into a theoretical model
Step6: Valid parametric empirical variogram
That, covariance matrix always postive semi-definite.
Gaussian Model
$$\gamma (h)=(s-n)\left(1-\exp \left(-{\frac {h^{2}}{r^{2}a}}\right)\right)+n1_{{(0,\infty )}}(h)$$
Step7: Exponential Model
$$\gamma (h)=(s-n)(1-\exp(-h/(ra)))+n1_{{(0,\infty )}}(h)$$
Step8: Spherical Variogram
$$\gamma (h)=(s-n)\left(\left({\frac {3h}{2r}}-{\frac {h^{3}}{2r^{3}}}\right)1_{{(0,r)}}(h)+1_{{[r,\infty )}}(h)\right)+n1_{{(0,\infty )}}(h)$$
new_data.crs = {'init':'epsg:4326'}
Explanation: Let's reproject to an Albers equal-area projection (or another projection with metric distances)
End of explanation
new_data = new_data.to_crs("+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs ")
new_data['newLon'] = new_data.apply(lambda c : c.geometry.x, axis=1)
new_data['newLat'] = new_data.apply(lambda c : c.geometry.y, axis=1)
new_data['logBiomass'] = np.log(new_data.plotBiomass)
new_data['logSppN'] = np.log(new_data.SppN)
## Let's make a simple linear trend here.
import statsmodels.api as sm
import statsmodels.formula.api as smf
## All data
### Now with statsmodels.api
#xx = X.SppN.values.reshape(-1,1)
#xx = sm.add_constant(xx)
#model = sm.OLS(Y.values.reshape(-1,1),xx)
model = smf.ols(formula='logBiomass ~ logSppN',data=new_data)
results = model.fit()
param_model = results.params
results.summary()
new_data['residuals1'] = results.resid
Explanation: Uncomment to reproject
proj string taken from: http://spatialreference.org/
End of explanation
# Consider the following subregion
section = new_data[lambda x: (x.LON > -100) & (x.LON < -85) & (x.LAT > 30) & (x.LAT < 35) ]
section.plot(column='SppN')
section.plot(column='plotBiomass')
section.shape
Explanation: The area is very large (about 35,000 points).
We need to work with a subset of it
End of explanation
Y_hat = results.predict(section)
ress = (section.logBiomass - Y_hat)
param_model.Intercept
conf_int = results.conf_int(alpha=0.05)
plt.scatter(section.logSppN,section.plotBiomass)
#plt.plot(section.SppN,param_model.Intercept + param_model.SppN * section.SppN)
plt.plot(section.logSppN,Y_hat)
#plt.fill_between(Y_hat,Y_hat + conf_int , Y_hat - conf_int)
conf_int
plt.scatter(section.logSppN,section.residuals1)
plt.scatter(section.newLon,section.newLat,c=section.residuals1)
plt.colorbar()
# Import GPFlow
import GPflow as gf
k = gf.kernels.Matern12(2, lengthscales=0.2, active_dims = [0,1] ) + gf.kernels.Constant(2,active_dims=[0,1])
results.resid.plot.hist()
model = gf.gpr.GPR(section[['newLon','newLat']].as_matrix(),section.residuals1.values.reshape(-1,1),k)
%time model.optimize()
k.get_parameter_dict()
model.get_parameter_dict()
import numpy as np
Nn = 300
dsc = section
predicted_x = np.linspace(min(dsc.newLon),max(dsc.newLon),Nn)
predicted_y = np.linspace(min(dsc.newLat),max(dsc.newLat),Nn)
Xx, Yy = np.meshgrid(predicted_x,predicted_y)
## Fake richness
fake_sp_rich = np.ones(len(Xx.ravel()))
predicted_coordinates = np.vstack([ Xx.ravel(), Yy.ravel()]).transpose()
#predicted_coordinates = np.vstack([section.SppN, section.newLon,section.newLat]).transpose()
predicted_coordinates.shape
means,variances = model.predict_y(predicted_coordinates)
sum(means)
fig = plt.figure(figsize=(16,10), dpi= 80, facecolor='w', edgecolor='w')
#plt.pcolor(Xx,Yy,np.sqrt(variances.reshape(Nn,Nn))) #,cmap=plt.cm.Greens)
plt.pcolormesh(Xx,Yy,np.sqrt(variances.reshape(Nn,Nn)))
plt.colorbar()
plt.scatter(dsc.newLon,dsc.newLat,c=dsc.SppN,edgecolors='')
plt.title("Variance of predicted biomass")
plt.colorbar()
import cartopy
plt.figure(figsize=(17,11))
proj = cartopy.crs.PlateCarree()
ax = plt.subplot(111, projection=proj)
ax = plt.axes(projection=proj)
#algo = new_data.plot(column='SppN',ax=ax,cmap=colormap,edgecolors='')
#ax.set_extent([-93, -70, 30, 50])
#ax.set_extent([-100, -60, 20, 50])
ax.set_extent([-95, -70, 25, 45])
#ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.add_feature(cartopy.feature.COASTLINE)
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAKES, alpha=0.9)
ax.stock_img()
#ax.add_geometries(new_data.geometry,crs=cartopy.crs.PlateCarree())
#ax.add_feature(cartopy.feature.RIVERS)
mm = ax.pcolormesh(Xx,Yy,means.reshape(Nn,Nn),transform=proj )
#cs = plt.contour(Xx,Yy,np.sqrt(variances).reshape(Nn,Nn),linewidths=2,cmap=plt.cm.Greys_r,linestyles='dotted')
cs = plt.contour(Xx,Yy,means.reshape(Nn,Nn),linewidths=2,colors='k',linestyles='dotted',levels=[4.0,5.0,6.0,7.0,8.0])
plt.clabel(cs, fontsize=16,inline=True,fmt='%1.1f')
#ax.scatter(new_data.lon,new_data.lat,c=new_data.error,edgecolors='',transform=proj,cmap=plt.cm.Greys,alpha=0.2)
plt.colorbar(mm)
plt.title("Predicted biomass residuals (GP mean)")
#(x.LON > -90) & (x.LON < -80) & (x.LAT > 40) & (x.LAT < 50)
Explanation: Residuals
End of explanation
from external_plugins.spystats import tools
filename = "../HEC_runs/results/low_q/data_envelope.csv"
envelope_data = pd.read_csv(filename)
gvg = tools.Variogram(new_data,'logBiomass',using_distance_threshold=600000)
gvg.envelope = envelope_data
gvg.empirical = gvg.envelope.variogram
gvg.lags = gvg.envelope.lags
vdata = gvg.envelope.dropna()
gvg.plot(refresh=False)
Explanation: Residuals of $ Biomass ~ SppRich + Z(x,y) + \epsilon $
Using all data
Model Analysis with the empirical variogram
End of explanation
from scipy.optimize import curve_fit
s = 0.345
r = 100000.0
nugget = 0.33
init_vals = [0.34, 50000, 0.33] # for [sill, range_a, nugget]
best_vals_gaussian, covar_gaussian = curve_fit(exponentialVariogram, xdata=vdata.lags.values, ydata=vdata.variogram.values, p0=init_vals)
#best_vals_gaussian, covar_gaussian = curve_fit(exponentialVariogram, xdata=vdata.lags, ydata=vdata.variogram, p0=init_vals)
#best_vals_gaussian, covar_gaussian = curve_fit(sphericalVariogram, xdata=vdata.lags, ydata=vdata.variogram, p0=init_vals)
s = best_vals_gaussian[0]
r = best_vals_gaussian[1]
nugget = best_vals_gaussian[2]
# note: despite its name, this wraps the exponential model fitted above
fitted_gaussianVariogram = lambda x: exponentialVariogram(x, sill=s, range_a=r, nugget=nugget)
gammas = pd.DataFrame(map(fitted_gaussianVariogram,hx))
import functools
fitted_gaussian2 = functools.partial(gaussianVariogram,s,r,nugget)
hx = np.linspace(0,600000,100)
vg = tools.Variogram(section,'residuals1',using_distance_threshold=500000)
## already fitted previously
s = 0.345255240992
r = 65857.797111
nugget = 0.332850902482
Explanation: Fitting the empirical variogram to into a theoretical model
End of explanation
def gaussianVariogram(h, sill=0, range_a=0, nugget=0):
    Ih = 1.0 if h > 0 else 0.0  # indicator 1_{(0, inf)}(h)
    g_h = (sill - nugget) * (1 - np.exp(-(h**2 / range_a**2))) + nugget * Ih
    return g_h
Explanation: Valid parametric empirical variogram
That is, the resulting covariance matrix is always positive semi-definite.
Gaussian Model
$$\gamma (h)=(s-n)\left(1-\exp \left(-{\frac {h^{2}}{r^{2}a}}\right)\right)+n1_{{(0,\infty )}}(h)$$
End of explanation
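A vectorised variant accepting numpy arrays (an assumption; the notebook's function is scalar-only) makes the model's limits easy to check: gamma(0) = 0, and gamma approaches the sill for large lags:

```python
import numpy as np

def gaussian_variogram(h, sill, range_a, nugget):
    h = np.asarray(h, dtype=float)
    gamma = (sill - nugget) * (1.0 - np.exp(-(h ** 2) / range_a ** 2))
    return gamma + nugget * (h > 0)  # nugget applies only at strictly positive lags

at_zero = float(gaussian_variogram(0.0, sill=1.0, range_a=10.0, nugget=0.2))
far_out = float(gaussian_variogram(1000.0, sill=1.0, range_a=10.0, nugget=0.2))
```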
def exponentialVariogram(h, sill=0, range_a=0, nugget=0):
    if isinstance(h, np.ndarray):
        Ih = np.where(h > 0, 1.0, 0.0)  # vectorized indicator 1_{(0, inf)}(h)
    else:
        Ih = 1.0 if h > 0 else 0.0
    g_h = (sill - nugget) * (1 - np.exp(-h / range_a)) + nugget * Ih
    return g_h

h = np.array([-1.0, 0.0, 2.0])
[1.0 if hx > 0.0 else 0.0 for hx in h]
Explanation: Exponential Model
$$\gamma (h)=(s-n)(1-\exp(-h/(ra)))+n1_{{(0,\infty )}}(h)$$
End of explanation
def sphericalVariogram(h, sill=0, range_a=0, nugget=0):
    Ih = 1.0 if h > 0 else 0.0
    I0r = 1.0 if 0 < h <= range_a else 0.0
    Irinf = 1.0 if h > range_a else 0.0
    g_h = (sill - nugget) * ((3*h / (2.0*range_a) - h**3 / (2.0*range_a**3)) * I0r + Irinf) + nugget * Ih
    return g_h
def theoreticalVariogram(model_function,sill,range_a,nugget):
return lambda x : model_function(x,sill,range_a,nugget)
tvariogram = theoreticalVariogram(gaussianVariogram, s, r, nugget)
%time gs = np.array(list(map(tvariogram, hx)))
x = vg.plot(with_envelope=True,num_iterations=30,refresh=False)
plt.plot(hx,gs,color='blue')
import statsmodels.regression.linear_model as lm
Mdist = vg.distance_coordinates.flatten()
%time vars = np.array(list(map(tvariogram, Mdist)))
CovMat = vars.reshape(len(section),len(section))
X = section.logSppN.values
Y = section.logBiomass.values
plt.imshow(CovMat)
%time model = lm.GLS(Y,X,sigma=CovMat)
%time results = model.fit()
new_data.residuals1
Explanation: Spherical Variogram
$$\gamma (h)=(s-n)\left(\left({\frac {3h}{2r}}-{\frac {h^{3}}{2r^{3}}}\right)1_{{(0,r)}}(h)+1_{{[r,\infty )}}(h)\right)+n1_{{(0,\infty )}}(h)$$
End of explanation |
8,419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python and Natural Language Technologies
Lecture 03, Week 04
Object oriented programming
27 September 2017
Introduction
Python has been object oriented since its first version
basically everything is an object including
class definitions
functions
modules
PEP8 defines style guidelines for classes as well
Defining classes
class keyword
instance explicitly bound to the first parameter of each method
named self by convention
__init__ is called after the instance is created
not exactly a constructor because the instance already exists
not mandatory
Step1: Class attributes
data attributes
Step2: Attributes can be added to instances
Step3: this will not affect other instances
Step4: __init__ may have arguments
Step5: Method attributes
functions inside the class definition
explicitly take the instance as first parameter
Step6: Calling methods
instance.method(param)
class.method(instance, param)
Step7: Special attributes
every object has a number of special attributes
double underscore or dunder notation
Step8: Data hiding with name mangling
by default every attribute is public
private attributes can be defined through name mangling
every attribute with at least two leading underscores and at most one trailing underscore is replaced with a mangled attribute
emulates private behavior
mangled name
Step9: Class attributes
class attributes are class-global attributes
roughly the same as static attributes in C++
Step10: Accessing class attributes via instances
Step11: Accessing class attributes via the class object
Step12: Setting the class object via the class
Step13: Cannot set via an instance
Step14: because this assignment creates a new attribute in the instance's namespace.
Step15: each object has a __class__ magic attribute that accesses the class object.
We can use this to access the class attribute
Step16: a2 has not shadowed class_attr, so we can access it through the instance
Step17: Inheritance
Python supports inheritance and multiple inheritance
Step18: New style vs. old style classes
Python 2
Python 2.2 introduced a new inheritance mechanism
new style classes vs. old style classes
class is new style if it subclasses object or one of its predecessors subclasses object
wide range of previously unavailable functionality
old style classes are the default in Python 2
Python 3
only supports new style classes
every class implicitly subclasses object
The differences between old style and new style classes are listed here
Step19: Python 3 implicitly subclasses object
Step20: Method inheritance
Methods are inherited and overridden in the usual way
Step21: Since data attributes can be created anywhere, they are only inherited if the code in the base class' method is called.
Step22: Calling the base class's constructor
since __init__ is not a constructor, the base class' init is not called automatically, if the subclass overrides it
Step23: The base class's methods can be called in at least two ways
Step24: super's usage was more complicated in Python 2
Step25: A complete example using super in the subclass's init
Step26: Duck typing and interfaces
no built-in mechanism for interfacing
the Abstract Base Classes (abc) module implements interface-like features
not used extensively in Python in favor of duck typing
"In computer programming, duck typing is an application of the duck test in type safety. It requires that type checking be deferred to runtime, and is implemented by means of dynamic typing or reflection." -- Wikipedia
"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck." -- Wikipedia
allows polymorphism without abstract base classes
Step27: NotImplementedError
emulating C++'s pure virtual function
Step28: we can still instantiate A
Step29: Magic methods
mechanism to implement advanced OO features
dunder methods
__str__ method
returns the string representation of the object
Python 2 has two separate methods __str__ and __unicode__ for bytestrings and unicode strings
Step30: Operator overloading
operators are mapped to magic functions
defining these functions defines/overrides operators
comprehensive list of operator functions are here
some built-in functions are included as well
__len__
Step31: How can we define comparison between different types?
Let's define a comparison between Complex and strings. We can check for the right operand's type
Step32: if the built-in type is the left-operand for which comparison against Complex is not defined, the operands are automatically swithced
Step33: Defining __gt__ does not automatically define __lt__
Step34: Assignment operator
the assignment operator (=) cannot be overridden
it performs reference binding instead of copying
tightly bound to the garbage collector
Other useful overloads
Attributes can be set, get and deleted. 4 magic methods govern these
Step35: getting an attribute that doesn't exist yet calls
getattribute first, which calls the base class' getattribute which fails
getattr is called.
Step36: setting an attribute
Step37: getting an existing attribute
Step38: modifying an attribute also calls __setattr__
Step39: deleting an attribute
Step40: Dictionary-like behavior can be achieved by overloading []
We also define __iter__ to support iteration.
Step41: Shallow copy vs. deep copy
There are 3 types of assignment and copying
Step42: Shallow copy
Step43: Deep copy
Step44: Both can be defined via magic methods
note that these implementations do not check for infinite loops
Step45: However, these are very far from complete implementations. We need to take care of preventing infinite loops and support for pickling (serialization module).
Object creation and destruction
Step46: Object introspection
support for full object introspection
dir lists every attribute of an object
Step47: Class A does not have a value attribute, since it is bounded to an instance. However, it does have the class global var attribute.
An instance of A has both
Step48: isinstance, issubclass
Step49: Every object has a __code__ attribute, which contains everything needed to call the function.
Step50: The inspect module provides further code introspection tools, including the getsourcelines function, which returns the source code itself.
Step51: Class decorators
Many OO features are achieved via a syntax sugar called decorators. We will talk about decorators in detail later.
The most common features are
Step52: Class methods
bound to the class instead of an instance of the class
first argument is a class instance
called cls by convention
typical usage
Step53: Properties
attributes with getters, setters and deleters
Properties are attributes with getters, setters and deleters. Property works as both a built-in function and as separate decorators.
Step54: Multiple inheritance
no interface inheritance in Python
since every class subclasses object, the diamond problem is present
method resolution order (MRO) defines the way methods are inherited
very different between old and new style classes | Python Code:
class ClassWithInit:
def __init__(self):
pass
class ClassWithoutInit:
pass
Explanation: Introduction to Python and Natural Language Technologies
Lecture 03, Week 04
Object oriented programming
27 September 2017
Introduction
Python has been object oriented since its first version
basically everything is an object including
class definitions
functions
modules
PEP8 defines style guidelines for classes as well
Defining classes
class keyword
instance explicitly bound to the first parameter of each method
named self by convention
__init__ is called after the instance is created
not exactly a constructor because the instance already exists
not mandatory
End of explanation
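That __init__ runs on an already-created instance can be demonstrated by also overriding __new__, the method that actually allocates the object:

```python
calls = []

class Traced:
    def __new__(cls):
        calls.append("__new__")    # the instance is created here
        return super().__new__(cls)

    def __init__(self):
        calls.append("__init__")   # ...and only initialised afterwards

t = Traced()
```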
class A:
def __init__(self):
self.attr1 = 42
def method(self):
self.attr2 = 43
a = A()
print(a.attr1)
# print(a.attr2) # raises AttributeError
a.method()
print(a.attr2)
Explanation: Class attributes
data attributes: these correspond to data members in C++
methods: these correspond to methods in C++
both are
created upon assignment
can be assigned anywhere (not just in __init__)
End of explanation
a.attr3 = 11
print(a.attr3)
Explanation: Attributes can be added to instances
End of explanation
a2 = A()
# a2.attr3 # raises AttributeError
Explanation: this will not affect other instances
End of explanation
class InitWithArguments:
def __init__(self, value, value_with_default=42):
self.attr = value
self.solution_of_the_world = value_with_default
class InitWithVariableNumberOfArguments:
def __init__(self, *args, **kwargs):
self.val1 = args[0]
self.val2 = kwargs.get('important_param', 42)
obj1 = InitWithArguments(41)
obj2 = InitWithVariableNumberOfArguments(1, 2, 3, param4="apple", important_param=23)
print(obj1.attr, obj1.solution_of_the_world,
obj2.val1, obj2.val2)
Explanation: __init__ may have arguments
End of explanation
class A:
def foo(self):
print("foo called")
def bar(self, param):
print("bar called with parameter {}".format(param))
Explanation: Method attributes
functions inside the class definition
explicitly take the instance as first parameter
End of explanation
c = A()
c.foo()
c.bar(42)
A.foo(c)
A.bar(c, 43)
Explanation: Calling methods
instance.method(param)
class.method(instance, param)
End of explanation
', '.join(A.__dict__)
Explanation: Special attributes
every object has a number of special attributes
double underscore or dunder notation: __attribute__
automatically created
advanced OOP features are implemented using these
End of explanation
class A:
def __init__(self):
self.__private_attr = 42
def foo(self):
self.__private_attr += 1
a = A()
a.foo()
# print(a.__private_attr) # raises AttributeError
a.__dict__
print(a._A__private_attr) # name mangled
a.__dict__
Explanation: Data hiding with name mangling
by default every attribute is public
private attributes can be defined through name mangling
every attribute with at least two leading underscores and at most one trailing underscore is replaced with a mangled attribute
emulates private behavior
mangled name: _ClassName__attrname
End of explanation
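Because the defining class's name is baked into the mangled name, the same double-underscore attribute in a subclass lands in a different slot and never clashes with the base class's:

```python
class Base:
    def __init__(self):
        self.__secret = "base"    # stored as _Base__secret

class Child(Base):
    def __init__(self):
        super().__init__()
        self.__secret = "child"   # stored as _Child__secret, a separate slot

c = Child()
```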
class A:
class_attr = 42
Explanation: Class attributes
class attributes are class-global attributes
roughly the same as static attributes in C++
End of explanation
a1 = A()
a1.class_attr
Explanation: Accessing class attributes via instances
End of explanation
A.class_attr
Explanation: Accessing class attributes via the class object
End of explanation
a1 = A()
a2 = A()
print(a1.class_attr, a2.class_attr)
A.class_attr = 43
a1.class_attr, a2.class_attr
Explanation: Setting the class object via the class
End of explanation
a1 = A()
a2 = A()
a1.class_attr = 11
a2.class_attr
Explanation: Cannot set via an instance
End of explanation
a1.__dict__
Explanation: because this assignment creates a new attribute in the instance's namespace.
End of explanation
a1.__class__.class_attr
Explanation: each object has a __class__ magic attribute that accesses the class object.
We can use this to access the class attribute:
End of explanation
a2.__dict__, a2.class_attr
Explanation: a2 has not shadowed class_attr, so we can access it through the instance
End of explanation
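A related pitfall: when a class attribute is mutable, in-place mutation through any instance is visible everywhere, because no shadowing assignment ever happens:

```python
class Registry:
    items = []                 # one list shared by the class and all instances

    def add(self, x):
        self.items.append(x)   # mutates the shared list; no new attribute is created

a, b = Registry(), Registry()
a.add(1)
b.add(2)
```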
class A:
pass
class B(A):
pass
a = A()
b = B()
print(isinstance(a, B))
print(isinstance(b, A))
print(issubclass(B, A))
print(issubclass(A, B))
Explanation: Inheritance
Python supports inheritance and multiple inheritance
End of explanation
%%python2
class OldStyleClass:
pass
class NewStyleClass(object):
pass
class ThisIsAlsoNewStyleClass(NewStyleClass):
pass
Explanation: New style vs. old style classes
Python 2
Python 2.2 introduced a new inheritance mechanism
new style classes vs. old style classes
class is new style if it subclasses object or one of its predecessors subclasses object
wide range of previously unavailable functionality
old style classes are the default in Python 2
Python 3
only supports new style classes
every class implicitly subclasses object
The differences between old style and new style classes are listed here: https://wiki.python.org/moin/NewClassVsClassicClass
End of explanation
class A: pass
class B(object): pass
print(issubclass(A, object))
print(issubclass(B, object))
Explanation: Python 3 implicitly subclasses object
End of explanation
class A(object):
def foo(self):
print("A.foo was called")
def bar(self):
print("A.bar was called")
class B(A):
def foo(self):
print("B.foo was called")
b = B()
b.foo()
b.bar()
Explanation: Method inheritance
Methods are inherited and overridden in the usual way
End of explanation
class A(object):
def foo(self):
self.value = 42
class B(A):
pass
b = B()
print(b.__dict__)
a = A()
print(a.__dict__)
a.foo()
print(a.__dict__)
Explanation: Since data attributes can be created anywhere, they are only inherited if the code in the base class' method is called.
End of explanation
class A(object):
def __init__(self):
print("A.__init__ called")
class B(A):
def __init__(self):
print("B.__init__ called")
class C(A): pass
b = B()
c = C()
Explanation: Calling the base class's constructor
since __init__ is not a constructor, the base class' init is not called automatically, if the subclass overrides it
End of explanation
class A(object):
def __init__(self):
print("A.__init__ called")
class B(A):
def __init__(self):
A.__init__(self)
print("B.__init__ called")
class C(B):
def __init__(self):
super().__init__()
print("C.__init__ called")
print("Instantiating B")
b = B()
print("Instantiating C")
c = C()
Explanation: The base class's methods can be called in at least two ways:
1. explicitly via the class name
1. using the super function
End of explanation
%%python2
class A(object):
def __init__(self):
print("A.__init__ called")
class B(A):
def __init__(self):
A.__init__(self)
print("B.__init__ called")
class C(A):
    def __init__(self):
        super(C, self).__init__()
        print("C.__init__ called")
print("Instantiating B")
b = B()
print("Instantiating C")
c = C()
Explanation: super's usage was more complicated in Python 2
End of explanation
class Person(object):
def __init__(self, name, age):
self.name = name
self.age = age
def __str__(self):
return "{0}, age {1}".format(self.name, self.age)
class Employee(Person):
def __init__(self, name, age, position, salary):
self.position = position
self.salary = salary
super().__init__(name, age)
def __str__(self):
return "{0}, position: {1}, salary: {2}".format(super().__str__(), self.position, self.salary)
e = Employee("Jakab Gipsz", 33, "manager", 450000)
print(e)
print(Person(e.name, e.age))
Explanation: A complete example using super in the subclass's init:
End of explanation
class Cat(object):
def make_sound(self):
self.mieuw()
def mieuw(self):
print("Mieuw")
class Dog(object):
def make_sound(self):
self.bark()
def bark(self):
print("Vau")
animals = [Cat(), Dog()]
for animal in animals:
# animal must have a make_sound method
animal.make_sound()
Explanation: Duck typing and interfaces
no built-in mechanism for interfacing
the Abstract Base Classes (abc) module implements interface-like features
not used extensively in Python in favor of duck typing
"In computer programming, duck typing is an application of the duck test in type safety. It requires that type checking be deferred to runtime, and is implemented by means of dynamic typing or reflection." -- Wikipedia
"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck." -- Wikipedia
allows polymorphism without abstract base classes
End of explanation
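When an object might not support the expected method, the duck-typed idiom is EAFP (try the call and catch AttributeError) rather than checking types up front; the classes below are illustrative:

```python
class Cat:
    def make_sound(self):
        return "Mieuw"

class Statue:
    pass                              # looks like an animal, cannot make a sound

def try_sound(animal):
    try:
        return animal.make_sound()    # EAFP: just attempt the call
    except AttributeError:
        return "<silent>"             # graceful fallback for non-ducks

sounds = [try_sound(x) for x in (Cat(), Statue())]
```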
class A(object):
def foo(self):
raise NotImplementedError()
class B(A):
def foo(self):
print("Yay.")
class C(A): pass
b = B()
b.foo()
c = C()
# c.foo() # raises NotImplementedError: C inherits A's foo unchanged
Explanation: NotImplementedError
emulating C++'s pure virtual function
End of explanation
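The abc module mentioned above makes this stricter: an abstract class cannot be instantiated at all, so the error surfaces at construction time rather than when the method is called:

```python
from abc import ABC, abstractmethod

class Animal(ABC):
    @abstractmethod
    def make_sound(self):
        ...

class Dog(Animal):
    def make_sound(self):
        return "Vau"

try:
    Animal()                   # TypeError: can't instantiate abstract class
    could_instantiate = True
except TypeError:
    could_instantiate = False
```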
a = A()
Explanation: we can still instantiate A
End of explanation
class ClassWithoutStr(object):
def __init__(self, value=42):
self.param = value
class ClassWithStr(object):
def __init__(self, value=42):
self.param = value
def __str__(self):
return "My id is {0} and my parameter is {1}".format(
id(self), self.param)
print("Printint a class that does not __str__: {}".format(ClassWithoutStr(345)))
print("Printint a class that defines __str__: {}".format(ClassWithStr(345)))
Explanation: Magic methods
mechanism to implement advanced OO features
dunder methods
__str__ method
returns the string representation of the object
Python 2 has two separate methods __str__ and __unicode__ for bytestrings and unicode strings
End of explanation
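Its companion __repr__ gives the developer-facing representation (used by the REPL and by containers) and also serves as the fallback when __str__ is not defined:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        return "Point({0}, {1})".format(self.x, self.y)

p = Point(1, 2)
```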
class Complex(object):
def __init__(self, real=0.0, imag=0.0):
self.real = real
self.imag = imag
def __abs__(self):
return (self.real**2 + self.imag**2) ** 0.5
def __eq__(self, other): # right hand side
return self.real == other.real and self.imag == other.imag
def __gt__(self, other):
return abs(self) > abs(other)
c1 = Complex()
c2 = Complex(1, 1)
abs(c2), c1 == c2
Explanation: Operator overloading
operators are mapped to magic functions
defining these functions defines/overrides operators
comprehensive list of operator functions are here
some built-in functions are included as well
__len__: defines the behavior of len(obj)
__abs__: defines the behavior of abs(obj)
End of explanation
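Arithmetic operators follow the same pattern; here is a hedged sketch on a separate toy class (not the notebook's Complex) overloading + via __add__:

```python
class Vec2:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):   # evaluated for: self + other
        return Vec2(self.x + other.x, self.y + other.y)

v = Vec2(1, 2) + Vec2(3, 4)
```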
class Complex(object):
def __init__(self, real=0.0, imag=0.0):
self.real = real
self.imag = imag
def __abs__(self):
return (self.real**2 + self.imag**2) ** 0.5
def __eq__(self, other): # right hand side
return self.real == other.real and self.imag == other.imag
def __gt__(self, other):
if isinstance(other, str):
return abs(self) > len(other)
return abs(self) > abs(other)
c1 = Complex()
c2 = Complex(1, 1)
abs(c2), c1 == c2, c2 > "a", c2 > "ab"
Explanation: How can we define comparison between different types?
Let's define a comparison between Complex and strings. We can check for the right operand's type:
End of explanation
"a" < c2
Explanation: if the built-in type is the left operand, for which comparison against Complex is not defined, the operands are automatically switched:
End of explanation
# "a" > c2 # raises TypeError
Explanation: Defining __gt__ does not automatically define __lt__:
End of explanation
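functools.total_ordering removes that asymmetry: given __eq__ and any one ordering method, it generates the remaining comparison operators (shown on a toy class):

```python
from functools import total_ordering

@total_ordering
class Box:
    def __init__(self, size):
        self.size = size

    def __eq__(self, other):
        return self.size == other.size

    def __gt__(self, other):
        return self.size > other.size
    # __lt__, __le__ and __ge__ are filled in by the decorator

small, big = Box(1), Box(2)
```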
class Noisy(object):
    def __setattr__(self, attr, value):
        print("Setting [{}] to value [{}]".format(attr, value))
        super().__setattr__(attr, value)

    def __getattr__(self, attr):
        # only reached when the normal lookup (__getattribute__) fails
        print("Getting (getattr) [{}]".format(attr))
        raise AttributeError(attr)

    def __getattribute__(self, attr):
        print("Getting (getattribute) [{}]".format(attr))
        return super().__getattribute__(attr)

    def __delattr__(self, attr):
        print("You wish")
Explanation: Assignment operator
the assignment operator (=) cannot be overridden
it performs reference binding instead of copying
tightly bound to the garbage collector
Other useful overloads
Attributes can be set, get and deleted. 4 magic methods govern these:
__setattr__: called when we set an attribute,
__delattr__: called when we delete an attribute using del or delattr
__getattribute__: called when accessing attributes
__getattr__: called when the 'usual' attribute lookup fails (for example the attribute is not present in the object's namespace)
End of explanation
a = Noisy()
try:
a.dog
except AttributeError:
print("AttributeError raised")
Explanation: getting an attribute that doesn't exist yet calls
__getattribute__ first, which calls the base class' __getattribute__, which fails,
so __getattr__ is called.
End of explanation
a.dog = "vau" # equivalent to setattr(a, "dog", "vau")
Explanation: setting an attribute
End of explanation
a.dog # equivalent to getattr(a, "dog")
Explanation: getting an existing attribute
End of explanation
a.dog = "Vau" # equivalent to setattr(a, "dog", "Vau")
Explanation: modifying an attribute also calls __setattr__
End of explanation
del a.dog # equivalent to delattr(a, "dog")
Explanation: deleting an attribute
End of explanation
class DictLike(object):
def __init__(self):
self.d = {}
def __setitem__(self, item, value):
print("Setting {} to {}".format(item, value))
self.d[item] = value
def __getitem__(self, item):
print("Getting {}".format(item))
return self.d.get(item, None)
def __iter__(self):
return iter(self.d)
d = DictLike()
d["a"] = 1
d["b"] = 2
for k in d:
print(k)
Explanation: Dictionary-like behavior can be achieved by overloading []
We also define __iter__ to support iteration.
End of explanation
l1 = [[1, 2], [3, 4, 5]]
l2 = l1
id(l1[0]) == id(l2[0])
l1[0][0] = 10
l2
Explanation: Shallow copy vs. deep copy
There are 3 types of assignment and copying:
the assignment operator (=) creates a new reference to the same object,
copy performs shallow copy,
deepcopy recursively deepcopies everything.
The difference between shallow and deep copy is only relevant for compound objects.
Assignment operator
End of explanation
from copy import copy
l1 = [[1, 2], [3, 4, 5]]
l2 = copy(l1)
id(l1) == id(l2), id(l1[0]) == id(l2[0])
l1[0][0] = 10
l2
Explanation: Shallow copy
End of explanation
from copy import deepcopy
l1 = [[1, 2], [3, 4, 5]]
l2 = deepcopy(l1)
id(l1) == id(l2), id(l1[0]) == id(l2[0])
l1[0][0] = 10
l2
Explanation: Deep copy
End of explanation
from copy import copy, deepcopy
class ListOfLists(object):
def __init__(self, lists):
self.lists = lists
self.list_lengths = [len(l) for l in self.lists]
def __copy__(self):
print("ListOfLists copy called")
return ListOfLists(self.lists)
def __deepcopy__(self, memo):
print("ListOfLists deepcopy called")
return ListOfLists(deepcopy(self.lists))
l1 = ListOfLists([[1, 2], [3, 4, 5]])
l2 = copy(l1)
l1.lists[0][0] = 12
print(l2.lists)
l3 = deepcopy(l1)
Explanation: Both can be defined via magic methods
note that these implementations do not check for infinite loops
End of explanation
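To see why the `memo` argument matters, here is a hedged sketch (class name invented for the example) of a `__deepcopy__` that registers itself in `memo` before recursing, which is the standard way to survive cyclic references:

```python
from copy import deepcopy

class SafeListOfLists(object):
    def __init__(self, lists):
        self.lists = lists
    def __deepcopy__(self, memo):
        new = SafeListOfLists.__new__(SafeListOfLists)
        memo[id(self)] = new          # register before recursing: this breaks cycles
        new.lists = deepcopy(self.lists, memo)
        return new

l1 = SafeListOfLists([[1, 2]])
l1.lists.append(l1)                   # deliberate cycle
l2 = deepcopy(l1)                     # terminates; a naive version would recurse forever
print(l2 is not l1, l2.lists[1] is l2)
```

`deepcopy` consults `memo` before copying any object, so the cycle is reproduced in the copy instead of looping.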
class A(object):
    # __new__ is implicitly a static method, so no decorator is needed
    def __new__(cls, *args, **kwargs):
instance = super().__new__(cls)
print("A.__new__ called")
return instance
def __init__(self):
print("A.__init__ called")
def __del__(self):
print("A.__del__ called")
try:
super(A, self).__del__()
except AttributeError:
print("parent class does not have a __del__ method")
a = A()
del a
Explanation: However, these are far from complete implementations. We would also need to take care of preventing infinite loops and of supporting pickling (Python's serialization module).
Object creation and destruction: the __new__ and the __del__ method
The __new__ method is called to create a new instance of a class. __new__ is a static method that takes the class object as a first parameter.
Typical implementations create a new instance of the class by invoking the superclass’s __new__() method using super(currentclass, cls).__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it.
__new__ has to return an instance of cls, on which __init__ is called.
The __del__ method is called when an object is about to be destroyed.
Although technically a destructor, it is handled by the garbage collector.
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
End of explanation
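A classic practical use of `__new__` (not from the notebook, just an illustration) is controlling instance creation, e.g. a singleton:

```python
class Singleton(object):
    _instance = None
    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

s1 = Singleton()
s2 = Singleton()
print(s1 is s2)  # True: __new__ hands back the same object every time
```

Note that `__init__` still runs on every call, so a real singleton usually guards its initialization as well.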
class A(object):
var = 12
def __init__(self, value):
self.value = value
def foo(self):
print("bar")
", ".join(dir(A))
Explanation: Object introspection
support for full object introspection
dir lists every attribute of an object
End of explanation
", ".join(dir(A(12)))
Explanation: Class A does not have a value attribute, since it is bound to an instance. However, it does have the class-level var attribute.
An instance of A has both:
End of explanation
class A(object):
pass
class B(A):
pass
b = B()
a = A()
print(isinstance(a, A))
print(isinstance(a, B))
print(isinstance(b, A))
print(isinstance(b, object))
Explanation: isinstance, issubclass
End of explanation
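The heading also mentions `issubclass`, which the cell above never calls; it relates classes rather than instances, and both functions accept a tuple of alternatives:

```python
class A(object): pass
class B(A): pass

print(issubclass(B, A))           # True
print(issubclass(A, B))           # False
print(issubclass(B, object))      # True: every class derives from object
print(isinstance(B(), (int, A)))  # True: matches any type in the tuple
```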
def evaluate(x):
a = 12
b = 3
return a*x + b
print(evaluate.__code__)
#dir(evaluate.__code__)
evaluate.__code__.co_varnames, evaluate.__code__.co_freevars, evaluate.__code__.co_stacksize
Explanation: Every function object has a __code__ attribute, which contains the compiled body and everything needed to call the function.
End of explanation
from inspect import getsourcelines
getsourcelines(evaluate)
Explanation: The inspect module provides further code introspection tools, including the getsourcelines function, which returns the source code itself.
End of explanation
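Another introspection tool from the same module is `inspect.signature`, which exposes a callable's parameters and defaults (toy function invented for the example):

```python
import inspect

def evaluate(x, scale=1.0):
    return scale * (12 * x + 3)

sig = inspect.signature(evaluate)
print(list(sig.parameters))             # ['x', 'scale']
print(sig.parameters['scale'].default)  # 1.0
```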
class A(object):
instance_count = 0
def __init__(self, value=42):
self.value = value
A.increase_instance_count()
@staticmethod
def increase_instance_count():
A.instance_count += 1
a1 = A()
print(A.instance_count)
a2 = A()
print(A.instance_count)
Explanation: Class decorators
Many OO features are achieved via a syntax sugar called decorators. We will talk about decorators in detail later.
The most common features are:
staticmethod,
classmethod,
property.
Static methods
defined inside a class but not bound to an instance (no self parameter)
analogous to C++'s static methods
End of explanation
class Complex(object):
def __init__(self, real, imag):
self.real = real
self.imag = imag
def __str__(self):
return '{0}+j{1}'.format(self.real, self.imag)
@classmethod
def from_str(cls, complex_str):
real, imag = complex_str.split('+')
imag = imag.lstrip('ij')
print("Instantiating {}".format(cls.__name__))
return cls(float(real), float(imag))
class ChildComplex(Complex): pass
c1 = Complex.from_str("3.45+j2")
print(c1)
c2 = Complex(3, 4)
print(c2)
c1 = ChildComplex.from_str("3.45+j2")
Explanation: Class methods
bound to the class instead of an instance of the class
first argument is the class itself, not an instance
called cls by convention
typical usage: factory methods for the class
Let's create a Complex class that can be initialized with either a string such as "5+j6" or with two numbers.
End of explanation
class Person(object):
def __init__(self, name, age):
self.name = name
self.age = age
@property
def age(self):
return self._age
@age.setter
def age(self, age):
try:
if 0 <= age <= 150:
self._age = age
except TypeError:
pass
def __str__(self):
return "Name: {0}, age: {1}".format(self.name, self.age)
p = Person("John", 12)
print(p)
p.age = "abc"
print(p)
p.age = 85
print(p)
p = Person("Pete", 17)
",".join(dir(p))
Explanation: Properties
attributes with getters, setters and deleters
Properties are attributes with getters, setters and deleters. Property works as both a built-in function and as separate decorators.
End of explanation
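The text names deleters as well, but the Person example defines only a getter and a setter; a deleter is registered the same way (toy class, not from the notebook):

```python
class Temperature(object):
    def __init__(self, celsius=0.0):
        self.celsius = celsius
    @property
    def celsius(self):
        return self._celsius
    @celsius.setter
    def celsius(self, value):
        self._celsius = float(value)
    @celsius.deleter
    def celsius(self):
        self._celsius = 0.0   # choose to reset rather than remove the attribute

t = Temperature(36.6)
del t.celsius                 # runs the deleter
print(t.celsius)              # 0.0
```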
class A(object):
def __init__(self, value):
print("A init called")
self.value = value
class B(object):
def __init__(self):
print("B init called")
class C(A, B):
def __init__(self, value1, value2):
print("C init called")
self.value2 = value2
super(C, self).__init__(value1)
class D(B, A): pass
print("Instantiating C")
c = C(1, 2)
print("Instantiating D")
d = D()
Explanation: Multiple inheritance
no interface inheritance in Python
since every class subclasses object, the diamond problem is present
method resolution order (MRO) defines the way methods are inherited
very different between old and new style classes
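The linearization itself can be inspected directly; every class carries it as `__mro__` (equivalently via the `mro()` method):

```python
class A(object): pass
class B(object): pass
class C(A, B): pass

print([cls.__name__ for cls in C.mro()])  # ['C', 'A', 'B', 'object']
```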
End of explanation |
8,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finite Time of Integration (fti)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Relevant Parameters
An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually.
Step3: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary.
Step4: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute().
Step5: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5.
Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled.
Step6: Influence on Light Curves
Step7: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Finite Time of Integration (fti)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
print(b['exptime'])
Explanation: Relevant Parameters
An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually.
End of explanation
b['exptime'] = 1, 'hr'
Explanation: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary.
End of explanation
print(b['fti_method'])
b['fti_method'] = 'oversample'
Explanation: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute().
End of explanation
print(b['fti_oversample'])
Explanation: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5.
Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled.
End of explanation
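Conceptually, what fti's oversampling does is just "evaluate the model at n times spanning the exposure and average". A PHOEBE-free numpy sketch of that idea (the model function and all numbers here are invented for illustration, this is not PHOEBE's implementation):

```python
import numpy as np

def oversample(flux_model, times, exptime, n=5):
    # n sample times spanning each exposure, centred on the mid-exposure stamp
    offsets = np.linspace(-exptime / 2.0, exptime / 2.0, n)
    sampled = flux_model(np.asarray(times, dtype=float)[:, None] + offsets[None, :])
    return sampled.mean(axis=1)

# a sharp artificial eclipse makes the smearing visible
model = lambda t: np.where(np.abs(t - 0.5) < 0.02, 0.6, 1.0)
print(oversample(model, [0.45, 0.5, 0.55], exptime=1.0 / 24))  # 1 hr on a 1-day scale
```

The mid-exposure point gets averaged with out-of-eclipse samples, so the observed eclipse is shallower and wider, exactly the effect described above.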
b.run_compute(fti_method='none', irrad_method='none', model='fti_off')
b.run_compute(fti_method='oversample', irrad_method='none', model='fti_on')
Explanation: Influence on Light Curves
End of explanation
afig, mplfig = b.plot(show=True, legend=True)
Explanation: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse.
End of explanation |
8,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The model in theory
We are going to use 4 features
Step1: Read data
Step2: Plot
Step3: Price
Step4: MACD
Step5: Stochastics Oscillator
Step6: Average True Range
Step7: Create complete DataFrame & Save Data | Python Code:
import pandas as pd
import matplotlib.pyplot as plt

def MACD(df,period1,period2,periodSignal):
EMA1 = pd.DataFrame.ewm(df,span=period1).mean()
EMA2 = pd.DataFrame.ewm(df,span=period2).mean()
MACD = EMA1-EMA2
    Signal = pd.DataFrame.ewm(MACD,span=periodSignal).mean()
Histogram = MACD-Signal
return Histogram
def stochastics_oscillator(df,period):
l, h = pd.DataFrame.rolling(df, period).min(), pd.DataFrame.rolling(df, period).max()
k = 100 * (df - l) / (h - l)
return k
def ATR(df,period):
    '''
    True Range: the largest of
    Method A: current High less the current Low
    Method B: current High less the previous Close (absolute value)
    Method C: current Low less the previous Close (absolute value)
    '''
df['H-L'] = abs(df['High']-df['Low'])
df['H-PC'] = abs(df['High']-df['Close'].shift(1))
df['L-PC'] = abs(df['Low']-df['Close'].shift(1))
TR = df[['H-L','H-PC','L-PC']].max(axis=1)
return TR.to_frame()
Explanation: The model in theory
We are going to use 4 features: The price itself and three extra technical indicators.
- MACD (Trend)
- Stochastics (Momentum)
- Average True Range (Volume)
Functions
Exponential Moving Average: is a type of infinite impulse response filter that applies weighting factors which decrease exponentially. The weighting for each older datum decreases exponentially, never reaching zero.
<img src="https://www.bionicturtle.com/images/uploads/WindowsLiveWriterGARCHapproachandExponentialsmoothingEWMA_863image_16.png">
MACD: The Moving Average Convergence/Divergence oscillator (MACD) is one of the simplest and most effective momentum indicators available. The MACD turns two trend-following indicators, moving averages, into a momentum oscillator by subtracting the longer moving average from the shorter moving average.
<img src="http://i68.tinypic.com/289ie1l.png">
Stochastics oscillator: The Stochastic Oscillator is a momentum indicator that shows the location of the close relative to the high-low range over a set number of periods.
<img src="http://i66.tinypic.com/2vam3uo.png">
Average True Range: is an indicator that measures volatility (NOT price direction). The largest of:
- Method A: Current High less the current Low
- Method B: Current High less the previous Close (absolute value)
- Method C: Current Low less the previous Close (absolute value)
<img src="http://d.stockcharts.com/school/data/media/chart_school/technical_indicators_and_overlays/average_true_range_atr/atr-1-trexam.png" width="400px">
Calculation:
<img src="http://i68.tinypic.com/e0kggi.png">
End of explanation
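As a standalone sanity check of the EMA/MACD arithmetic described above (pandas only; the prices are synthetic, invented because EURUSD.csv is not included here):

```python
import numpy as np
import pandas as pd

close = pd.Series(np.linspace(100.0, 110.0, 60))   # synthetic steady uptrend

ema_fast = close.ewm(span=12).mean()
ema_slow = close.ewm(span=26).mean()
macd = ema_fast - ema_slow
signal = macd.ewm(span=9).mean()
histogram = macd - signal

# in a steady uptrend the fast EMA sits above the slow one, so MACD > 0
print(bool((macd.iloc[20:] > 0).all()))
```

Beware that `ewm`'s first positional argument is `com`, not `span`, which is why the keyword form is used throughout.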
df = pd.read_csv('EURUSD.csv',usecols=[1,2,3,4])
df = df.iloc[::-1]
df["Close"] = (df["Close"].str.split()).apply(lambda x: float(x[0].replace(',', '.')))
df["Open"] = (df["Open"].str.split()).apply(lambda x: float(x[0].replace(',', '.')))
df["High"] = (df["High"].str.split()).apply(lambda x: float(x[0].replace(',', '.')))
df["Low"] = (df["Low"].str.split()).apply(lambda x: float(x[0].replace(',', '.')))
dfPrices = pd.read_csv('EURUSD.csv',usecols=[1])
dfPrices = dfPrices.iloc[::-1]
dfPrices["Close"] = (dfPrices["Close"].str.split()).apply(lambda x: float(x[0].replace(',', '.')))
dfPrices.head(2)
Explanation: Read data
End of explanation
price = dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)].values.ravel()
Explanation: Plot
End of explanation
prices = dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)].values.ravel()
plt.figure(figsize=(25,7))
plt.plot(prices,label='Test',color='black')
plt.title('Price')
plt.legend(loc='upper left')
plt.show()
Explanation: Price
End of explanation
macd = MACD(dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)],12,26,9)
plt.figure(figsize=(25,7))
plt.plot(macd,label='macd',color='red')
plt.title('MACD')
plt.legend(loc='upper left')
plt.show()
Explanation: MACD
End of explanation
stochastics = stochastics_oscillator(dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)],14)
plt.figure(figsize=(14,7))
#First 100 points because it's too dense
plt.plot(stochastics[0:100],label='Stochastics Oscillator',color='blue')
plt.title('Stochastics Oscillator')
plt.legend(loc='upper left')
plt.show()
Explanation: Stochastics Oscillator
End of explanation
atr = ATR(df.iloc[len(df.index)-60:len(df.index)],14)
plt.figure(figsize=(21,7))
#First 100 points because it's too dense
plt.plot(atr[0:100],label='ATR',color='green')
plt.title('Average True Range')
plt.legend(loc='upper left')
plt.show()
Explanation: Average True Range
End of explanation
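A tiny standalone check of the true-range rule (three invented bars; pandas only):

```python
import pandas as pd

bars = pd.DataFrame({'High':  [10.0, 11.0, 12.5],
                     'Low':   [ 9.0, 10.2, 11.5],
                     'Close': [ 9.5, 10.8, 12.0]})

h_l  = bars['High'] - bars['Low']
h_pc = (bars['High'] - bars['Close'].shift(1)).abs()
l_pc = (bars['Low'] - bars['Close'].shift(1)).abs()
tr = pd.concat([h_l, h_pc, l_pc], axis=1).max(axis=1)
print(tr.tolist())  # bar 1 has no previous close, so only High-Low applies there
```

The `max(axis=1)` picks the largest of the three candidate ranges per bar, matching methods A-C above.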
dfPriceShift = dfPrices.shift(-1)
dfPriceShift.rename(columns={'Close':'CloseTarget'}, inplace=True)
dfPriceShift.head(2)
macd = MACD(dfPrices,12,26,9)
macd.rename(columns={'Close':'MACD'}, inplace=True)
stochastics = stochastics_oscillator(dfPrices,14)
stochastics.rename(columns={'Close':'Stochastics'}, inplace=True)
atr = ATR(df,14)
atr.rename(columns={0:'ATR'}, inplace=True)
final_data = pd.concat([dfPrices,dfPriceShift,macd,stochastics,atr], axis=1)
# Delete the entries with missing values (where the stochastics couldn't be computed yet) because have a lot of datapoints ;)
final_data = final_data.dropna()
final_data.info()
final_data
final_data.to_csv('EURUSD_TechnicalIndicators.csv',index=False)
Explanation: Create complete DataFrame & Save Data
End of explanation |
8,422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABU Quantitative System User Documentation
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>Section 27: Combining Dogs-of-the-Dow Stock Selection with Timing Strategies</b></font>
</center>
Step2: 1. 狗股理论进行选股
狗股理论是美国基金经理迈克尔·奥希金斯于1991年提出的一种投资策略。
投资股票是为了获取回报,纸上富贵固然令人热血沸腾,但现金收入才是实实在在的回报。现金收入来自哪里?可以是公司分红,也可以是卖出股票后获得的差价,即股息和资本利得。
公司股息取决于经营状况,如果业绩稳定,分红政策稳定,那么就可以有稳定的现金回报。股价虽然最终取决于公司经营状况,但在很多情况下会发生背离,在市场火爆时股价被捧上天,在市场低迷时又被打入谷底,所以与股息回报相比,资本利得的稳定性不高,不易把握。
只所以被称作狗股理论是因为将股票比做骨头,将股息比喻为骨头上的肉,狗会先吃肉最多的骨头。
狗股策略具体的做法是,投资者每年年底从道琼斯工业平均指数成份股中找出10只股息率最高的股票,第二年买入这10支股票,一年后再找出10只股息率最高的成分股,卖出手中不在名单中的股票,买入新上榜单的股票,每年年初年底都重复这一投资动作,便可获取超过大盘的回报。
下面首先延用周期突破策略做为买入因子,卖出策略也还是继续延用,先回测未使用狗骨选股策略的情况,如下:
Step3: 狗股理论使用的是参考值为股息率,很多基于狗股理论的选股策略进行了基因变种,如使用PEG替换股息率进行选股,或者直接使用上一年度的涨幅值做为选股参数,基于基本面数据进行选股的示例将在之后的章节进行示例,本节首先基于涨幅值做为狗股选股的参数。
abupy中内置的选股因子AbuPickStockNTop即是在选股周期上对多只股票涨跌幅进行排序,选取top n个股票做为交易目标,如下示例使用AbuPickStockNTop在选股周期内选择涨幅最大的top3做为交易目标,如下:
Step4: 结果如上所示,下面从交易单中可以看到整个择时周期内策略只对三支股票进行了择时:
Step5: 上面虽然也达到了选股的目的,但是整个择时周期内只运行了一次选股策略,即只完成了静态选股,后面的择时周期内很有可能随着时间的推进,选股目标发生了变化,为了解决这个问题,可以将选股策略序列做为择时因子策略的一个参数进行构造,如下所示,这样选股策略即做为择时策略的一个专属因子,它默认在择时周期内每一个月都会重新进行一次选股策略,即完成了动态选股(也可以设置参数改变周期),使用如下所示:
Step6: 结果如上所示,下面从交易单中可以看到整个择时周期内策略对多支股票进行了择时,并非只有3支:
Step7: 下面更进一步打印出择时周期内每一个月的择时交易目标,可以看到每一个月最多的交易目标数量是3个,即这个月选股结果的3个交易目标都发生了突破的情况,如下所示:
Step11: 上面的选股策略分别使用静态和动态两种模式实现,有些类似静态市盈率和动态市盈率的计算。
2. 选股策略与择时策略的配合
本节的重点是讲解择时策略和选股策略的配合,达到1+1>2的目标,首先你要彻底理解你的策略,上面示例的狗骨策略选取的是正向涨幅最大的top支股票,为配合这个选股结果最简单的择时策略可以是每一个月进行一次买入操作,买入上一个月涨幅最高的top支股票,持有一个月后卖出,是一个趋势跟踪的策略:
Step12: 针对美股市场的回测如上所示,下面输出交易单可以看到每一个月在月底都买入了3支股票,持有一个月后卖出操作,如下:
Step13: 下面将市场切换到沙盒数据中的A股以相同的选股,择时策略进行回测,如下:
Step14: 上面使用的择时策略AbuWeekMonthBuy本身是一个中性策略,没有明确的方向,即择时策略本身不属于趋势跟踪也不属于均值回复,通过搭配不同的选股策略才能明确择时策略本身的属性,如上例配合形成的趋势跟踪策略。
将上例AbuPickStockNTop选股策略中的direction_top=-1, AbuPickStockNTop选股策略中direction_top参数的意义为选取方向,默认为1,即选取涨幅最高的n_top个股票,传递-1即选取跌幅最高的n_top个股票,这样搭配上AbuWeekMonthBuy最终生效的策略将变成一个均值回复策略,买入上一个月跌幅最高的top支股票,每一个月进行一次买入操作, 持有一个月后卖出,如下所示:
Step17: 3. 进一步制造非均衡,提高交易概率优势
上面示例了中性择时策略配合选股策略分别形成趋势跟踪与均值回复策略,下面将示例通过选股策略配合原本带属性的择时策略,进一步在整个策略中制造非均衡,提高交易概率优势
下面在上一节使用的AbuFactorBuyWD的基础上配合使用AbuPickStockNTop做为选股策略,AbuFactorBuyWD的择时策略为
默认以40天为周期(8周)结合涨跌阀值计算周几适合买入
回测运行中每一个月重新计算一次上述的周几适合买入
在策略日任务中买入信号为:昨天下跌,今天开盘也下跌,且明天是计算出来的上涨概率大的'周几'
下面构建AbuPickStockNTop时也以xd=40天为一个周期,计算周期内跌幅最大的3个股票,即整个策略在上面3条择时策略的基础上又添加了一条如下:
4. 每一个月计算一次上两个月(xd=40)中跌幅最大的3支股票做为交易目标
实现如下所示:
Step18: 回测结果上所示,下面使用上一节不使用选股策略进行配合的回测进行对比,如下: | Python Code:
# base library imports
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# insert at index 0 so only the github checkout is used, avoiding version mismatches with a pip-installed abupy
sys.path.insert(0, os.path.abspath('../'))
import abupy
# use sandbox data so the data environment matches the book
abupy.env.enable_example_env_ipython()
from abupy import AbuFactorSellNDay, AbuFactorBuyWD, AbuPickStockNTop
from abupy import AbuFactorBuyBreak, AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuWeekMonthBuy
from abupy import abu, AbuFactorCloseAtrNStop, ABuProgress, AbuMetricsBase, EMarketTargetType
Explanation: ABU Quantitative System User Documentation
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>Section 27: Combining Dogs-of-the-Dow Stock Selection with Timing Strategies</b></font>
</center>
Author: Abu
Copyright Abu Quant; reproduction without permission is prohibited.
abu quant system github address (a star is welcome)
ipython notebook for this section
The previous section demonstrated a periodic mean-reversion short-term timing strategy; this section walks through examples of stock-selection and timing strategies working together.
Timing and stock selection are the two pillars of a trading system and complement each other. For instance, the AbuFactorBuyWD strategy implemented in the previous section is essentially mean reversion, because its buy precondition is that yesterday fell and today's open fell again while tomorrow is the weekday with the statistically highest probability of rising within the period. In essence, all a quantitative strategy has to do is manufacture a non-equilibrium trading environment, so that in the final non-equilibrium outcome the money won exceeds the money lost. Both timing and selection strategies work toward this goal; different timing strategies should be paired with suitable selection strategies to reach 1+1>2, and only when selection and timing are well matched, and you thoroughly understand your strategy, can the final result be good.
First import the abupy modules used in this section:
End of explanation
cash = 3000000
# keep the period-breakout strategy as the buy factor
buy_factors = [{'xd': 21, 'class': AbuFactorBuyBreak},
{'xd': 42, 'class': AbuFactorBuyBreak}]
# keep the same sell factors as before
sell_factors = [
{'stop_loss_n': 1.0, 'stop_win_n': 3.0,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}
]
def run_loo_back(choice_symbols, ps=None, n_folds=2, start=None, end=None, only_info=False):
    Wrap one backtest run in a helper function
if choice_symbols[0].startswith('us'):
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_US
else:
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abu_result_tuple, _ = abu.run_loop_back(cash,
buy_factors,
sell_factors,
ps,
start=start,
end=end,
n_folds=n_folds,
choice_symbols=choice_symbols)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=only_info,
only_info=only_info,
only_show_returns=True)
return abu_result_tuple
# use the US stocks in the sandbox as the backtest targets
us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL',
'usGOOG', 'usWUBA', 'usVIPS']
_ = run_loo_back(us_choice_symbols)
Explanation: 1. Stock selection with the Dogs of the Dow theory
The "Dogs of the Dow" is an investment strategy proposed in 1991 by the American fund manager Michael O'Higgins.
We invest in stocks for returns. Paper profits may set the blood racing, but cash income is the tangible return. Where does cash income come from? Either company dividends or the price difference captured when shares are sold, i.e. dividends and capital gains.
Dividends depend on how the business performs: with stable earnings and a stable payout policy, the cash return is steady. Share prices also ultimately depend on the business, but they frequently diverge from it, pumped sky-high in hot markets and hammered down in depressed ones, so compared with dividend income, capital gains are less stable and harder to pin down.
The name comes from likening stocks to bones and dividends to the meat on them: a dog goes for the meatiest bones first.
Concretely, at the end of every year the investor picks the 10 constituents of the Dow Jones Industrial Average with the highest dividend yield and buys those 10 stocks the next year; one year later the 10 highest-yielding constituents are selected again, holdings that dropped off the list are sold and the newcomers are bought. Repeating this at each year end can earn a return above the broad market.
Below we first keep the period-breakout strategy as the buy factor and keep the same sell factors, and backtest the case where no Dogs-style stock picker is used, as follows:
End of explanation
stock_pickers = [{'class': AbuPickStockNTop,
'symbol_pool': us_choice_symbols, 'n_top': 3}]
abu_result_tuple = run_loo_back(us_choice_symbols, stock_pickers)
Explanation: The original theory uses dividend yield as its reference value, and many Dogs-of-the-Dow-style pickers are mutated variants of it, e.g. replacing dividend yield with PEG, or simply using the previous year's price change as the selection parameter. Selection based on fundamental data will be demonstrated in later chapters; this section first uses price change as the Dogs-style selection parameter.
abupy's built-in stock-picking factor AbuPickStockNTop ranks a pool of stocks by their price change over the selection period and takes the top n as trading targets. The example below uses AbuPickStockNTop to take the 3 biggest gainers within the selection period as trading targets, as follows:
End of explanation
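Stripped of the abupy machinery, the ranking AbuPickStockNTop performs amounts to sorting a pool by period return and slicing (toy return numbers invented for illustration):

```python
# toy price changes over one selection period, one entry per candidate symbol
returns = {'usTSLA': 0.31, 'usNOAH': -0.05, 'usSFUN': 0.12,
           'usBIDU': 0.07, 'usAAPL': 0.18}

n_top, direction_top = 3, 1          # direction_top=-1 would pick the biggest losers
ranked = sorted(returns, key=returns.get, reverse=(direction_top == 1))
print(ranked[:n_top])                # the 3 biggest gainers
```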
set(abu_result_tuple.orders_pd.symbol)
Explanation: The result is shown above. The order list below shows that over the whole timing period the strategy only ever timed three stocks:
End of explanation
buy_factors = [{'xd': 21, 'class': AbuFactorBuyBreak, 'stock_pickers': stock_pickers},
{'xd': 42, 'class': AbuFactorBuyBreak, 'stock_pickers': stock_pickers}]
abu_result_tuple = run_loo_back(us_choice_symbols)
Explanation: The above does achieve stock selection, but the picker ran only once over the whole timing period, i.e. static selection; as time advances the appropriate targets are likely to change. To solve this, the stock-picker list can be passed as a parameter when constructing the buy factor, as shown below. The picker then becomes a factor owned by the timing strategy and, by default, reruns the selection every month within the timing period, i.e. dynamic selection (the period can also be changed via a parameter). Usage is shown below:
End of explanation
set(abu_result_tuple.orders_pd.symbol)
Explanation: The result is shown above. The order list below shows that over the whole timing period the strategy timed many stocks, not just 3:
End of explanation
orders_pd = abu_result_tuple.orders_pd
date_ind = orders_pd.index
def print_month_trade(base_year, range_month):
month_fmt = lambda year, mt: '{}-0{}-01'.format(
year, mt) if mt < 10 else '{}-{}-01'.format(year, mt)
for month in range_month:
if month < 12:
next_month = month + 1
trade_year = base_year
else:
next_month = 1
trade_year = base_year + 1
        print('{}-{} selected trading targets for the month: {}'.format(trade_year, month,
set(orders_pd[(date_ind > month_fmt(trade_year, month)) &
(date_ind < month_fmt(trade_year, next_month))].symbol)))
print_month_trade(2014, np.arange(9, 13))
print_month_trade(2015, np.arange(1, 13))
print_month_trade(2016, np.arange(1, 8))
Explanation: Going one step further, we print the trading targets for every month of the timing period. At most 3 targets show up in any month, meaning that in such a month all 3 stocks picked for that month triggered a breakout, as follows:
End of explanation
Build the stock picker with AbuPickStockNTop, n_top=3; note the parameter xd=20,
i.e. a 20-day selection window (the targets are the 3 biggest gainers of the previous month)
stock_pickers = [{'class': AbuPickStockNTop,
'symbol_pool': us_choice_symbols, 'n_top': 3, 'xd': 20}]
The buy factor AbuWeekMonthBuy with is_buy_month=True buys at the end of each month;
stock_pickers is attached to the buy factor as its own picker for dynamic selection
buy_factors = [{'class': AbuWeekMonthBuy, 'is_buy_month': True,
'stock_pickers': stock_pickers}]
The sell factor is AbuFactorSellNDay with sell_n=20, i.e. each bought stock is sold after being held for 20 trading days (one month)
sell_factors = [{'class': AbuFactorSellNDay, 'sell_n': 20, 'is_sell_today': True}]
# run the backtest
abu_result_tuple = run_loo_back(us_choice_symbols)
Explanation: The stock picker above was used in both a static and a dynamic mode, somewhat analogous to static versus dynamic price-earnings ratios.
2. Pairing stock-selection strategies with timing strategies
The focus of this section is how timing and selection strategies cooperate to reach a 1+1>2 outcome. First you must thoroughly understand your strategy: the Dogs-style picker above selects the top biggest gainers, and the simplest timing strategy to match that selection is to buy once a month, buying the previous month's top gainers and selling after holding for one month, i.e. a trend-following strategy:
End of explanation
pd.options.display.max_rows = 21
abu_result_tuple.orders_pd.filter(['symbol', 'buy_date', 'buy_factor',
'sell_date', 'sell_type_extra', 'profit'])[:21]
Explanation: The backtest on the US market is shown above. The order list below shows that at the end of every month 3 stocks were bought and then sold after being held for one month, as follows:
End of explanation
# sandbox symbols for the A-share market
cn_choice_symbols = ['002230', '300104', '300059', '601766', '600085',
'600036', '600809', '000002', '002594']
# same as above, except symbol_pool=cn_choice_symbols
stock_pickers = [{'class': AbuPickStockNTop,
'symbol_pool': cn_choice_symbols, 'n_top': 3, 'xd': 20}]
buy_factors = [{'class': AbuWeekMonthBuy, 'is_buy_month': True,
'stock_pickers': stock_pickers}]
abu_result_tuple = run_loo_back(cn_choice_symbols)
Explanation: Next we switch the market to the A-share symbols in the sandbox data and backtest with the same selection and timing strategies, as follows:
End of explanation
stock_pickers = [{'class': AbuPickStockNTop,
'symbol_pool': cn_choice_symbols, 'n_top': 3,
'direction_top': -1, 'xd': 20}]
buy_factors = [{'class': AbuWeekMonthBuy, 'is_buy_month': True,
'stock_pickers': stock_pickers}]
abu_result_tuple = run_loo_back(cn_choice_symbols, only_info=True)
Explanation: The timing strategy AbuWeekMonthBuy used above is itself neutral: it has no direction of its own, belonging to neither trend following nor mean reversion; only the stock picker it is paired with gives the overall strategy its character, as in the trend-following combination above.
Now set direction_top=-1 in the AbuPickStockNTop picker. direction_top selects the ranking direction: the default 1 picks the n_top biggest gainers, while -1 picks the n_top biggest losers. Paired with AbuWeekMonthBuy, the effective strategy becomes mean reversion: buy the previous month's biggest losers, once a month, and sell after holding for one month, as follows:
End of explanation
xd=40: to match the default period of the AbuFactorBuyWD timing strategy
direction_top=-1: pick the n_top biggest losers
stock_pickers = [{'class': AbuPickStockNTop,
'symbol_pool': us_choice_symbols, 'n_top': 3,
'direction_top': -1, 'xd': 40}]
stock_pickers is attached to the AbuFactorBuyWD buy factor as its own picker for dynamic selection
buy_factors = [{'class': AbuFactorBuyWD, 'stock_pickers': stock_pickers}]
# the sell strategy is AbuFactorSellNDay with sell_n=1, i.e. hold for only one day
sell_factors = [{'class': AbuFactorSellNDay, 'sell_n': 1, 'is_sell_today': True}]
abu_result_tuple = run_loo_back(us_choice_symbols,
start='2013-07-26', end='2016-07-26', only_info=True)
Explanation: 3. Creating further imbalance to sharpen the probability edge
Above, a neutral timing strategy was paired with stock pickers to form trend-following and mean-reversion strategies respectively. Next we pair a picker with a timing strategy that already has a character of its own, creating further imbalance in the overall strategy and improving the probability edge of its trades.
Below we build on the AbuFactorBuyWD factor from the previous section and add AbuPickStockNTop as its stock picker. AbuFactorBuyWD's timing rules are:
Using a default 40-day (8-week) period, combine rise/fall thresholds to compute which weekday is suitable for buying
During the backtest, recompute that "suitable weekday" once every month
In the daily strategy task, the buy signal is: yesterday fell, today's open also fell, and tomorrow is the computed weekday with the highest probability of rising
When constructing AbuPickStockNTop below we also use xd=40 days as the period and pick the 3 biggest losers within it, i.e. the overall strategy adds one more rule on top of the 3 timing rules above:
4. Once a month, recompute the 3 biggest losers of the previous two months (xd=40) as the trading targets
The implementation is shown below:
End of explanation
buy_factors = [{'class': AbuFactorBuyWD}]
sell_factors = [{'class': AbuFactorSellNDay, 'sell_n': 1, 'is_sell_today': True}]
abu_result_tuple = run_loo_back(us_choice_symbols,
start='2013-07-26', end='2016-07-26', only_info=True)
Explanation: The backtest result is shown above. For comparison, below we rerun the previous section's backtest without the stock picker, as follows:
End of explanation |
8,423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Aggregation
Step1: Pandas
Step2: What is the row sum?
Step3: Column sum?
Step4: Spark
Step5: How do we skip the header? How about using find()? Remember that find() returns an index (0 when the line starts with the match, -1 when it is absent), not a boolean.
Step6: Row Sum
Cast to integer and sum!
Step7: Column Sum
This one's a bit trickier, and portends ill for large, complex data sets (like example 5)...
Let's enumerate the list comprising each RDD "line" such that each value is indexed by the corresponding column number.
Step8: Notice how flatMap works here
Step9: Column sum with Spark.sql.dataframe
Step10: groupBy() without arguments groups by all columns | Python Code:
import numpy as np
data = np.arange(1000).reshape(100,10)
print data.shape
Explanation: Simple Aggregation
End of explanation
import pandas as pd
pand_tmp = pd.DataFrame(data,
columns=['x{0}'.format(i) for i in range(data.shape[1])])
pand_tmp.head()
Explanation: Pandas
End of explanation
pand_tmp.sum(axis=1)
Explanation: What is the row sum?
End of explanation
pand_tmp.sum(axis=0)
pand_tmp.to_csv('numbers.csv', index=False)
Explanation: Column sum?
End of explanation
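The axis convention trips people up often enough to deserve a tiny demo: axis=1 collapses the columns (one sum per row), axis=0 collapses the rows (one sum per column):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(2, 3), columns=['x0', 'x1', 'x2'])
print(df.sum(axis=1).tolist())   # row sums
print(df.sum(axis=0).tolist())   # column sums
```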
import findspark
import os
findspark.init() # you need this before importing pyspark
import pyspark
sc = pyspark.SparkContext('local[4]', 'pyspark')
lines = sc.textFile('numbers.csv', 18)
for l in lines.take(3):
print l
lines.take(3)
type(lines.take(1))
Explanation: Spark
End of explanation
lines = lines.filter(lambda x: x.find('x') != 0)
for l in lines.take(2):
print l
data = lines.map(lambda x: x.split(','))
data.take(3)
Explanation: How do we skip the header? One option is find(): it returns the index of the first match (0 for a header line that starts with 'x', -1 if 'x' is absent), so filtering on find('x') != 0 keeps every non-header line.
End of explanation
def row_sum(x):
int_x = map(lambda x: int(x), x)
return sum(int_x)
data_row_sum = data.map(row_sum)
print data_row_sum.collect()
print data_row_sum.count()
Explanation: Row Sum
Cast to integer and sum!
End of explanation
def col_key(x):
for i, value in enumerate(x):
yield (i, int(value))
tmp = data.flatMap(col_key)
tmp.take(15)
Explanation: Column Sum
This one's a bit trickier, and portends ill for large, complex data sets (like example 5)...
Let's enumerate the list comprising each RDD "line" such that each value is indexed by the corresponding column number.
End of explanation
tmp.take(3)
tmp = tmp.groupByKey()
for i in tmp.take(2):
print i, type(i)
data_col_sum = tmp.map(lambda x: sum(x[1]))
for i in data_col_sum.take(2):
print i
print data_col_sum.collect()
print data_col_sum.count()
Explanation: Notice how flatMap works here: the generator produced for each row is flattened into one long sequence of tuples, so the first element of each tuple (the column index) cycles from 0 to 9 as we walk through a row's values.
End of explanation
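groupByKey ships every value to the reducers before anything is summed, while `reduceByKey(lambda a, b: a + b)` folds values per key as records stream by. A Spark-free sketch of that per-key fold, with toy pairs standing in for `col_key`'s output:

```python
from collections import defaultdict

# (column_index, value) pairs for three 10-wide rows, like col_key would yield
pairs = [(i, v) for row in range(3) for i, v in enumerate(range(row * 10, row * 10 + 10))]

sums = defaultdict(int)
for key, value in pairs:         # the fold reduceByKey performs per key
    sums[key] += value
print(sorted(sums.items())[:3])  # [(0, 30), (1, 33), (2, 36)]
```

On a real RDD, `data.flatMap(col_key).reduceByKey(lambda a, b: a + b)` would give the same column sums with far less shuffling than the groupByKey version above.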
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
sc
pyspark_df = sqlContext.createDataFrame(pand_tmp)
pyspark_df.take(2)
Explanation: Column sum with Spark.sql.dataframe
End of explanation
for i in pyspark_df.columns:
print pyspark_df.groupBy().sum(i).collect()
Explanation: groupBy() without arguments groups by all columns
End of explanation |
8,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quickstart
geoplot is a geospatial data visualization library designed for data scientists and geospatial analysts that just want to get things done. In this tutorial we will learn the basics of geoplot and see how it is used.
You can run this tutorial code yourself interactively using Binder.
Step1: The starting point for geospatial analysis is geospatial data. The standard way of dealing with such data in Python is geopandas, a geospatial data parsing library built on top of the well-known pandas library.
Step2: geopandas represents data using a GeoDataFrame, which is just a pandas DataFrame with a special geometry column containing a geometric object describing the physical nature of the record in question
Step3: All functions in geoplot take a GeoDataFrame as input. To learn more about manipulating geospatial data, see the section Working with Geospatial Data.
Step4: If your data consists of a bunch of points, you can display those points using pointplot.
Step5: If you have polygonal data instead, you can plot that using a geoplot polyplot.
Step6: We can combine the these two plots using overplotting. Overplotting is the act of stacking several different plots on top of one another, useful for providing additional context for our plots
Step7: You might notice that this map of the United States looks very strange. The Earth, being a sphere, is impossible to portray in two dimensions. Hence, whenever we take data off the sphere and place it onto a map, we are using some kind of projection, or method of flattening the sphere. Plotting data without a projection, or "carte blanche", creates distortion in your map. We can "fix" the distortion by picking a better projection.
The Albers equal area projection is one of the most common projections used in the United States. Here's how you use it with geoplot
Step8: Much better! To learn more about projections check out the section of the tutorial on Working with Projections.
What if you want to create a webmap instead? This is also easy to do.
Step9: This is a static webmap. Interactive (scrolly-panny) webmaps are also possible
Step10: This map tells a clear story
Step11: geoplot comes equipped with a broad variety of visual options which can be tuned to your liking.
Step12: Let's look at a couple of other plot types available in geoplot (for the full list, see the Plot Reference).
Step13: This choropleth of population by state shows how much larger certain coastal states are than their peers in the central United States. A choropleth is the standard-bearer in cartography for showing information about areas because it's easy to make and interpret. | Python Code:
# Configure matplotlib.
%matplotlib inline
# Unclutter the display.
import pandas as pd; pd.set_option('max_columns', 6)
Explanation: Quickstart
geoplot is a geospatial data visualization library designed for data scientists and geospatial analysts that just want to get things done. In this tutorial we will learn the basics of geoplot and see how it is used.
You can run this tutorial code yourself interactively using Binder.
End of explanation
import geopandas as gpd
Explanation: The starting point for geospatial analysis is geospatial data. The standard way of dealing with such data in Python is geopandas, a geospatial data parsing library built on top of the well-known pandas library.
End of explanation
import geoplot as gplt
usa_cities = gpd.read_file(gplt.datasets.get_path('usa_cities'))
usa_cities.head()
Explanation: geopandas represents data using a GeoDataFrame, which is just a pandas DataFrame with a special geometry column containing a geometric object describing the physical nature of the record in question: a POINT in space, a POLYGON in the shape of New York, and so on.
End of explanation
import geoplot as gplt
Explanation: All functions in geoplot take a GeoDataFrame as input. To learn more about manipulating geospatial data, see the section Working with Geospatial Data.
End of explanation
continental_usa_cities = usa_cities.query('STATE not in ["HI", "AK", "PR"]')
gplt.pointplot(continental_usa_cities)
Explanation: If your data consists of a bunch of points, you can display those points using pointplot.
End of explanation
contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))
gplt.polyplot(contiguous_usa)
Explanation: If you have polygonal data instead, you can plot that using a geoplot polyplot.
End of explanation
ax = gplt.polyplot(contiguous_usa)
gplt.pointplot(continental_usa_cities, ax=ax)
Explanation: We can combine the these two plots using overplotting. Overplotting is the act of stacking several different plots on top of one another, useful for providing additional context for our plots:
End of explanation
import geoplot.crs as gcrs
ax = gplt.polyplot(contiguous_usa, projection=gcrs.AlbersEqualArea())
gplt.pointplot(continental_usa_cities, ax=ax)
Explanation: You might notice that this map of the United States looks very strange. The Earth, being a sphere, is impossible to portray in two dimensions. Hence, whenever we take data off the sphere and place it onto a map, we are using some kind of projection, or method of flattening the sphere. Plotting data without a projection, or "carte blanche", creates distortion in your map. We can "fix" the distortion by picking a better projection.
The Albers equal area projection is one of the most common projections used in the United States. Here's how you use it with geoplot:
End of explanation
ax = gplt.webmap(contiguous_usa, projection=gcrs.WebMercator())
gplt.pointplot(continental_usa_cities, ax=ax)
Explanation: Much better! To learn more about projections check out the section of the tutorial on Working with Projections.
What if you want to create a webmap instead? This is also easy to do.
End of explanation
ax = gplt.webmap(contiguous_usa, projection=gcrs.WebMercator())
gplt.pointplot(continental_usa_cities, ax=ax, hue='ELEV_IN_FT', legend=True)
Explanation: This is a static webmap. Interactive (scrolly-panny) webmaps are also possible: see the demo for an example of one.
This map tells us that there are more cities on either coast than there are in and around the Rocky Mountains, but it doesn't tell us anything about the cities themselves. We can make an informative plot by adding hue to the plot:
End of explanation
ax = gplt.webmap(contiguous_usa, projection=gcrs.WebMercator())
gplt.pointplot(continental_usa_cities, ax=ax, hue='ELEV_IN_FT', cmap='terrain', legend=True)
Explanation: This map tells a clear story: that cities in the central United States have a higher ELEV_IN_FT than most other cities in the United States, especially those on the coast. Toggling the legend on helps make this result more interpretable.
To use a different colormap, use the cmap parameter:
End of explanation
ax = gplt.polyplot(
contiguous_usa, projection=gcrs.AlbersEqualArea(),
edgecolor='white', facecolor='lightgray',
figsize=(12, 8)
)
gplt.pointplot(
continental_usa_cities, ax=ax, hue='ELEV_IN_FT', cmap='Blues',
scheme='quantiles',
scale='ELEV_IN_FT', limits=(1, 10),
legend=True, legend_var='scale',
legend_kwargs={'frameon': False},
legend_values=[-110, 1750, 3600, 5500, 7400],
legend_labels=['-110 feet', '1750 feet', '3600 feet', '5500 feet', '7400 feet']
)
ax.set_title('Cities in the Continental United States by Elevation', fontsize=16)
Explanation: geoplot comes equipped with a broad variety of visual options which can be tuned to your liking.
End of explanation
gplt.choropleth(
contiguous_usa, hue='population', projection=gcrs.AlbersEqualArea(),
edgecolor='white', linewidth=1,
cmap='Greens', legend=True,
scheme='FisherJenks',
legend_labels=[
'<3 million', '3-6.7 million', '6.7-12.8 million',
'12.8-25 million', '25-37 million'
]
)
Explanation: Let's look at a couple of other plot types available in geoplot (for the full list, see the Plot Reference).
End of explanation
boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
collisions = gpd.read_file(gplt.datasets.get_path('nyc_collision_factors'))
ax = gplt.kdeplot(collisions, cmap='Reds', shade=True, clip=boroughs, projection=gcrs.AlbersEqualArea())
gplt.polyplot(boroughs, zorder=1, ax=ax)
Explanation: This choropleth of population by state shows how much larger certain coastal states are than their peers in the central United States. A choropleth is the standard-bearer in cartography for showing information about areas because it's easy to make and interpret.
End of explanation |
8,425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning curve
Table of contents
Data preprocessing
Fitting random forest
Feature importance
Step1: Data preprocessing
Load simulation dataframe and apply specified quality cuts
Extract desired features from dataframe
Get separate testing and training datasets
Step2: Feature selection
Step3: Feature transform
Step4: Produce 10-fold CV learning curve | Python Code:
import sys
sys.path.append('/home/jbourbeau/cr-composition')
print('Added to PYTHONPATH')
import argparse
from collections import defaultdict
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn.apionly as sns
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, learning_curve
import composition as comp
import composition.analysis.plotting as plotting
# Plotting-related
sns.set_palette('muted')
sns.set_color_codes()
color_dict = {'P': 'b', 'He': 'g', 'Fe': 'm', 'O': 'r'}
%matplotlib inline
Explanation: Learning curve
Table of contents
Data preprocessing
Fitting random forest
Feature importance
End of explanation
df_sim, cut_dict_sim = comp.load_dataframe(type_='sim', config='IT73', return_cut_dict=True)
selection_mask = np.array([True] * len(df_sim))
standard_cut_keys = ['lap_reco_success', 'lap_zenith', 'num_hits_1_30', 'IT_signal',
'max_qfrac_1_30', 'lap_containment', 'energy_range_lap']
for key in standard_cut_keys:
selection_mask *= cut_dict_sim[key]
df_sim = df_sim[selection_mask]
feature_list, feature_labels = comp.get_training_features()
print('training features = {}'.format(feature_list))
X_train_sim, X_test_sim, y_train_sim, y_test_sim, le = comp.get_train_test_sets(
df_sim, feature_list, comp_class=True, train_he=True, test_he=True)
print('number training events = ' + str(y_train_sim.shape[0]))
print('number testing events = ' + str(y_test_sim.shape[0]))
Explanation: Data preprocessing
Load simulation dataframe and apply specified quality cuts
Extract desired features from dataframe
Get separate testing and training datasets
End of explanation
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
pipeline = comp.get_pipeline('RF')
sfs = SFS(pipeline,
k_features=6,
forward=True,
floating=False,
scoring='accuracy',
print_progress=True,
cv=3,
n_jobs=10)
sfs = sfs.fit(X_train_sim, y_train_sim)
Explanation: Feature selection
End of explanation
X_train_sim = sfs.transform(X_train_sim)
X_test_sim = sfs.transform(X_test_sim)
Explanation: Feature transform
End of explanation
pipeline = comp.get_pipeline('RF')
train_sizes, train_scores, test_scores =\
learning_curve(estimator=pipeline,
X=X_train_sim,
y=y_train_sim,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=20,
verbose=3)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,
color='b', linestyle='-',
marker='o', markersize=5,
label='training accuracy')
plt.fill_between(train_sizes,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='b')
plt.plot(train_sizes, test_mean,
color='g', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.title('RF Classifier')
plt.legend()
# plt.ylim([0.8, 1.0])
plt.tight_layout()
plt.show()
pipeline.named_steps['classifier']
fig, axarr = plt.subplots(2, 2)
for max_depth, ax in zip([2, 5, 6, 10], axarr.flatten()):
print('max_depth = {}'.format(max_depth))
pipeline = comp.get_pipeline('RF')
params = {'classifier__max_depth': max_depth}
pipeline.set_params(**params)
train_sizes, train_scores, test_scores =\
learning_curve(estimator=pipeline,
X=X_train_sim,
y=y_train_sim,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=20,
verbose=0)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
ax.plot(train_sizes, train_mean,
color='b', linestyle='-',
marker='o', markersize=5,
label='training accuracy')
ax.fill_between(train_sizes,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='b')
ax.plot(train_sizes, test_mean,
color='g', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
ax.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
ax.grid()
ax.set_xlabel('Number of training samples')
ax.set_ylabel('Accuracy')
ax.set_title('max depth = {}'.format(max_depth))
ax.set_ylim([0.6, 0.9])
ax.legend()
plt.tight_layout()
plt.show()
Explanation: Produce 10-fold CV learning curve
End of explanation |
8,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem: remove the numbers that do not repeat in a list
For example, given the input [ 1, 1, 2, 3, 3], the value 2 does not appear more than once, so drop it and return [ 1, 1, 3, 3]
Constraint: native Python only; things like numpy are not allowed
Solution idea:
1. Find the elements that occur more than once in the list. The count() method can solve this
2. Open an empty list and store the results of step 1 in it. Use the append() method
3. Write a for loop that visits every element once
Step1: The code above has clear logic, but it is a bit long-winded...
Python has a classic idiom that fits exactly this logic; the pattern is:
[element for element in list condition]
where the condition is usually an if clause
So that long-winded block can be simplified into the Python code below
Step2: Much more concise, and the shape of the logic is still perfectly clear. This idiom is called a list comprehension
This pattern is quite common in Python and other languages, so make good use of it.
Next, let's introduce something called $\lambda$. Normally we write functions outside the main program, for example
```
def function_name():
Step3: Confused? ~~That's right! Being confused is exactly the point...~~
The advantage of lambda is that it keeps code concise, but it also raises the bar for maintenance; when to use it is a matter of judgment.
Still, getting familiar with $\lambda$ syntax may help if you later want to get into functional programming, so take it as you see fit
Another approach
OK, the above turned a long-winded block into a single line. But that only changes the technique; the underlying logic is unchanged.
The advantage of this approach is that it is simple and direct ~~you know... dumb~~; the drawback is that it must scan every element of the list before it can produce a result, which leads to a lot of unnecessary computation. So really think a few extra seconds before posting; there may well be a better solution...
So I came up with another approach that tries to minimize the repeated work:
Grab all the distinct elements of the input list; for example, the distinct elements of [1,1,2,3] are [1,2,3], and those of [5,4,5,5,2] are [2,4,5]. You can get them with set()
Write a for loop whose number of iterations equals the number of distinct elements
In each iteration, use count() to find how many times that element occurs
If the count is 1, remove that element with remove()
Because the loop count drops and we no longer walk the entire list, overall performance improves, especially when the list is long
Step4: That's the idea, but does it actually work? Let's try it out...
We now have three (dumb) methods:
* Non_unique -- simple and direct
* comlist -- the list-comprehension version
* lambda_unique -- the anonymous-function version, using lambda
plus one slightly smarter method:
* better_unique
Next we can use %timeit to check how these four functions perform...
Step5: The output above says
Step6: Whatever the technique, the dumb methods run in about 3.6~3.7 $\mu$s, while the smarter method runs in 1.3~1.4 $\mu$s.
This tells us that however the surface technique varies, the computation that reaches the machine is roughly the same. A good algorithm does far more for performance.
Let's see what happens when the list gets very long
Step7: Sure enough, once the list gets long the dumb methods can't keep up; time to try yet another idea
But to be fast you also have to use the right tool; otherwise, however marvelous your sword technique and inner arts, their power still can't match ~~the New Armstrong Cyclone Jet Armstrong Cannon~~... See the example below
Step8: Below is another solution, by veky. It basically builds a tuple holding two sets: one stores the values already seen (seen), the other the values that are not unique (nonunique)
But I don't quite understand why this one is also so fast; because it's a tuple?... Orz
Step9: If you're not as godlike as veky and still want speed, anaconda ships a ~~cheat device~~ called numba that speeds code up without changing it. Usage is simple too... as follows | Python Code:
def Non_unique(numlist):
result=[]
for n in numlist:
n_replicate=numlist.count(n)
if n_replicate >= 2:
result.append(n)
return result
Explanation: Problem: remove the numbers that do not repeat in a list
For example, given the input [ 1, 1, 2, 3, 3], the value 2 does not appear more than once, so drop it and return [ 1, 1, 3, 3]
Constraint: native Python only; things like numpy are not allowed
Solution idea:
1. Find the elements that occur more than once in the list. The count() method can solve this
2. Open an empty list and store the results of step 1 in it. Use the append() method
3. Write a for loop that visits every element once
End of explanation
def comlist(numlist):
return [num for num in numlist if numlist.count(num) > 1]
Explanation: The code above has clear logic, but it is a bit long-winded...
Python has a classic idiom that fits exactly this logic; the pattern is:
[element for element in list condition]
where the condition is usually an if clause
So that long-winded block can be simplified into the Python code below
End of explanation
lambda_unique=lambda numlist:[num for num in numlist if numlist.count(num)>1]
Explanation: Much more concise, and the shape of the logic is still perfectly clear. This idiom is called a list comprehension
This pattern is quite common in Python and other languages, so make good use of it.
Next, let's introduce something called $\lambda$. Normally we write functions outside the main program, for example
```
def function_name():
    ....Korean-style contents....  (a pun on 函式 "function" vs 韓式 "Korean-style")
    BBQ
    ginseng_chicken
    rice_cake
    return the_bill
main program
main program body
call the function
```
However, ~~when people get lazy~~ the rules go out the window, and someone wondered: can we embed the function right inside the main program?
That's how $\lambda$ was born. Such a function is also called an anonymous function; ~~mysterious and hacker-ish, isn't it~~...
Enough talk; here is the usage:
python
    variable = lambda parameters: expression
In this example the parameter is the input list, and the expression just copies the earlier list comprehension. The finished version is below:
End of explanation
def better_unique(numlist):
for n in list(set(numlist)):
n_replicate=numlist.count(n)
if n_replicate ==1:
numlist.remove(n)
return numlist
Explanation: Confused? ~~That's right! Being confused is exactly the point...~~
The advantage of lambda is that it keeps code concise, but it also raises the bar for maintenance; when to use it is a matter of judgment.
Still, getting familiar with $\lambda$ syntax may help if you later want to get into functional programming, so take it as you see fit
Another approach
OK, the above turned a long-winded block into a single line. But that only changes the technique; the underlying logic is unchanged.
The advantage of this approach is that it is simple and direct ~~you know... dumb~~; the drawback is that it must scan every element of the list before it can produce a result, which leads to a lot of unnecessary computation. So really think a few extra seconds before posting; there may well be a better solution...
So I came up with another approach that tries to minimize the repeated work:
Grab all the distinct elements of the input list; for example, the distinct elements of [1,1,2,3] are [1,2,3], and those of [5,4,5,5,2] are [2,4,5]. You can get them with set()
Write a for loop whose number of iterations equals the number of distinct elements
In each iteration, use count() to find how many times that element occurs
If the count is 1, remove that element with remove()
Because the loop count drops and we no longer walk the entire list, overall performance improves, especially when the list is long
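A tiny illustration of the two building blocks in those steps, set() for the distinct elements and count() for each one's frequency:

```python
nums = [5, 4, 5, 5, 2]
distinct = set(nums)
print(sorted(distinct))        # [2, 4, 5]
for n in sorted(distinct):
    print(n, nums.count(n))    # 2 -> 1, 4 -> 1, 5 -> 3
```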
End of explanation
# start with an arbitrary small list
alist=[1, 1, 6, 7, 9, 2, 5, 4, 5, 2, 3]
%timeit Non_unique(alist)
Explanation: That's the idea, but does it actually work? Let's try it out...
We now have three (dumb) methods:
* Non_unique -- simple and direct
* comlist -- the list-comprehension version
* lambda_unique -- the anonymous-function version, using lambda
plus one slightly smarter method:
* better_unique
Next we can use %timeit to check how these four functions perform...
End of explanation
%timeit comlist(alist)
%timeit lambda_unique(alist)
%timeit better_unique(alist)
Explanation: The message above says:
1. Ran the call 100000 times and recorded the execution time
2. Ran it another 100000 times and recorded the execution time
3. Ran it a third 100000 times and recorded the execution time
4. Took the shortest of runs 1-3 and divided by 100000, giving 3.64 $\mu$s for a single call
timeit chooses the counts automatically based on how long the code takes; slower code gets executed fewer times...
Now let's do the same for the others
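The same measurement can be scripted with the standard timeit module (a sketch; %timeit performs this repeat / take-the-minimum / divide dance for you, and the iteration count here is reduced for brevity):

```python
import timeit

alist = [1, 1, 6, 7, 9, 2, 5, 4, 5, 2, 3]

def comlist(numlist):
    return [num for num in numlist if numlist.count(num) > 1]

number = 10000
best = min(timeit.repeat(lambda: comlist(alist), repeat=3, number=number))
print('%.2f microseconds per loop' % (best / number * 1e6))
```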
End of explanation
# Randomly generate a list: 2000 iterations, each drawing a batch of random samples, giving a list of roughly twenty thousand entries.
# Feel free to change the length... but don't push the loop count much above 2000 or the tests will take a long time
import random
x=[]
n=2000
for i in range(n):
y=random.sample(range(9*n),10+4)
x.extend(y)
print(len(set(x))/len(x)) # the ratio of unique to duplicated values is roughly half and half
y=x
%timeit Non_unique(y)
y=x
%timeit comlist(y)
y=x
%timeit lambda_unique(y)
y=x
%timeit better_unique(y)
Explanation: Whatever the technique, the dumb methods run in about 3.6~3.7 $\mu$s, while the smarter method runs in 1.3~1.4 $\mu$s.
This tells us that however the surface technique varies, the computation that reaches the machine is roughly the same. A good algorithm does far more for performance.
Let's see what happens when the list gets very long
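The widening gap is predicted by the operation counts: every count() call scans the whole list, so the dumb methods do one full scan per element, O(n^2), while better_unique does one scan per distinct value, O(k*n). A rough illustrative tally (plain Python, not a benchmark):

```python
def scans_dumb(numlist):
    # one full count() scan per element -> n * n comparisons
    return len(numlist) * len(numlist)

def scans_smart(numlist):
    # one full count() scan per *distinct* element -> k * n comparisons
    return len(set(numlist)) * len(numlist)

nums = [1, 1, 6, 7, 9, 2, 5, 4, 5, 2, 3] * 100   # 1100 elements, 8 distinct values
print(scans_dumb(nums), scans_smart(nums))        # 1210000 vs 8800
```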
End of explanation
# from Checkio "O(n) :-P" by "Veky"
# https://checkio.org/mission/non-unique-elements/publications/veky/python-3/on-p/?ordering=most_voted
from collections import Counter
def counter_unique(data):
nonunique = Counter(data) - Counter(set(data))
return [x for x in data if x in nonunique]
y=x
%timeit counter_unique(y)
Explanation: Sure enough, once the list gets long the dumb methods can't keep up; time to try yet another idea
But to be fast you also have to use the right tool; otherwise, however marvelous your sword technique and inner arts, their power still can't match ~~the New Armstrong Cyclone Jet Armstrong Cannon~~... See the example below
End of explanation
# from Checkio "Two bins" by "Veky"
#https://checkio.org/mission/non-unique-elements/publications/veky/python-3/two-bins/?ordering=most_voted
def two_bins(sequence):
    bins = seen, nonunique = set(), set() # bins = (set(), set()); the first set is called seen, the second nonunique
for number in sequence:bins[number in seen].add(number)
return [number for number in sequence if number in nonunique]
y=x
%timeit two_bins(y)
Explanation: Below is another solution, by veky. It basically builds a tuple holding two sets: one stores the values already seen (seen), the other the values that are not unique (nonunique)
But I don't quite understand why this one is also so fast; because it's a tuple?... Orz
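The tuple itself isn't where the speed comes from; it is just a two-way switch. `number in seen` is an O(1) hash-set membership test, and the resulting bool (False == 0, True == 1) indexes the tuple to pick which set receives the value, so the whole thing is a single O(n) pass. A minimal demonstration:

```python
bins = seen, nonunique = set(), set()
# bool subscripting works because False == 0 and True == 1
assert bins[False] is seen and bins[True] is nonunique

for number in [1, 1, 2]:
    bins[number in seen].add(number)

print(seen)       # {1, 2}
print(nonunique)  # {1}
```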
End of explanation
from numba import jit
@jit
def fast_Non_unique(numlist):
result=[]
for n in numlist:
n_replicate=numlist.count(n)
if n_replicate >= 2:
result.append(n)
return result
y=x
%timeit fast_Non_unique(y)
from numba import jit
@jit
def fast_better_unique(numlist):
for n in list(set(numlist)):
n_replicate=numlist.count(n)
if n_replicate ==1:
numlist.remove(n)
return numlist
y=x
%timeit fast_better_unique(y)
Explanation: If you're not as godlike as veky and still want speed, anaconda ships a ~~cheat device~~ called numba that speeds code up without changing it. Usage is simple too... as follows
End of explanation |
8,427 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Train and test our model
| Python Code::
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 15
train_max=0
test_max=0
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
print(f"\nMaximum training accuracy: {train_max}\n")
print(f"\nMaximum test accuracy: {test_max}\n")
|
8,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Runs this to start from scratch
Both should return an error if no credentials were previously set and your are using the service account of the instance.
Step1: Authentication
As a developer, I want to interact with GCP via gcloud.
gcloud auth login (run from a Notebook terminal)
This obtains your credentials via a web flow and stores them in /root/.config/gcloud/credentials.db and for backward compatibility in /root/.config/gcloud/legacy_credentials/[YOUR_EMAIL]/adc.json
Now
Step2: NOTE
For BigQuery, if you run a query using the following, your identity should have the following IAM roles or similar | Python Code:
!gcloud auth revoke --quiet
!gcloud auth application-default revoke --quiet
Explanation: Runs this to start from scratch
Both should return an error if no credentials were previously set and you are using the service account of the instance.
End of explanation
# General
import google.auth
credentials, project_id = google.auth.default()
# Transparent for google.cloud libraries
from google.cloud import bigquery
client = bigquery.Client()
# If you try to run a query, this gets updated with values.
client.__dict__['_credentials'].__dict__
Explanation: Authentication
As a developer, I want to interact with GCP via gcloud.
gcloud auth login (run from a Notebook terminal)
This obtains your credentials via a web flow and stores them in /root/.config/gcloud/credentials.db and for backward compatibility in /root/.config/gcloud/legacy_credentials/[YOUR_EMAIL]/adc.json
Now:
- gcloud commands run from the Notebook's cells find your credentials automatically.
- Other code or SDKs (Python, Java, ...) do not automatically pick up those credentials.
Reference: https://cloud.google.com/sdk/gcloud/reference/auth/login
As a developer, I want my code to interact with GCP via SDK.
gcloud auth application-default login (run using the GCP option in the navigation menu)
This obtains your credentials via a web flow and stores them in /root/.config/gcloud/application_default_credentials.json.
Now:
- Other code or SDK (Python, Java,...) finds the credentials automatically.
- Can run code locally which would normally run on a server without the need of a credentials file.
Reference: https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login
For more information, you can read the Google documentation or this excellent blog post.
Authenticate code running in the Notebook
End of explanation
import base64
import os
CREDENTIALS_FILE = "/root/.config/gcloud/application_default_credentials.json"
def get_credentials_text():
if not os.path.isfile(CREDENTIALS_FILE):
print("\x1b[31m\nNo credentials defined. Run gcloud auth application-default login.\n\x1b[0m")
return
return open(CREDENTIALS_FILE, "r").read()
credentials_txt = get_credentials_text()
credentials_b64 = base64.b64encode(credentials_txt.encode('utf-8')).decode('utf-8')
# Can not have both credentialsFile and credentials set.
spark.conf.unset("credentialsFile")
spark.conf.set("credentials", credentials_b64)
print("\x1b[32m\nSpark is now authenticated on this Master node.\n\x1b[0m")
Explanation: NOTE
For BigQuery, if you run a query using the following, your identity should have the following IAM roles or similar:
- roles/bigquery.jobUser (Lower resource is Project) that includes the bigquery.jobs.create permission.
- roles/bigquery.dataViewer (Lower resource is Dataset) that includes bigquery.tables.getData permission.
py
query_job = client.query(QUERY)
rows = query_job.result()
Authenticate Spark
You can authenticate Spark using the credentials file or its content. Although you could use the file directly, workers would not have it locally because gcloud auth application-default login runs only for the Master. It means that the application_default_credentials.json file is only created on the Master node.
We have 3 options:
Option 1 [Recommended]: Read the file and pass the value as a string.
Option 2: Have the add-on to write the file to the master and workers. Requires proper permissions.
Option 3: Manually copy the file using a gcloud scp for example. Requires proper firewall access.
End of explanation |
8,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification in Sci-kit Learn
This code predicts the newsgroup from a list of 20 possible newsgroups. It is trained on the commonly used 20-newsgroups dataset, which is an unusual classification dataset in that each newsgroup is very distinctive, leading to picking models that do better with this kind of data.
The code does the following
Step1: Load Data
Step2: Investigate Training Set
Step3: Test Set
Step4: Test Tokenization
CountVectorizer
Step5: Test transform on some short text
Step6: Convert to coo sparce matrix for easier display
Step7: Build inverse vocabulary
Step8: TFIDF
TfidfTransformer
Step9: Notice the following in the above values
Step10: Pipelines
Pipelines pass the output of one transform to the input of the next.
Pipeline
Step11: Function to Test a Pipeline
Step12: Importance of TFIDF Weighting
Step13: TfidfVectorizer combines CountVectorizer and TfidfTransformer
Step14: Hyper-parameter tests on SGDClassifier
Step15: Test Naive Bayes model
MultinomialNB
Step16: K-nearest neighbors model
KNeighborsClassifier
Step17: Nearest Centroid Model
NearestCentroid
Step18: Logistic Regression
This is same as to SGDClassifier with log loss, but uses different code/solver.
Step19: Most Influential Features
Step20: Tokenization Tests
Docs
Step21: Char N-grams
Step22: SVM Model
SVC
LinearSVC
Step27: Metadata Features
Code adapted from
Step28: Randomized Paramerter Search
RandomizedSearchCV
Step29: Random variable of 10^x with x uniformly distributed
Step30: Random Parameter Search
RandomizedSearchCV
Example
Step31: Plot Parameter Performance
Step32: All Tests | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.core.display import display, HTML
from IPython.display import Audio
import os
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
import time
display(HTML("<style>.container { width:97% !important; }</style>")) #Set width of iPython cells
Explanation: Classification in Sci-kit Learn
This code predicts the newsgroup from a list of 20 possible newsgroups. It is trained on the commonly used 20-newsgroups dataset, which is an unusual classification dataset in that each newsgroup is very distinctive, leading to picking models that do better with this kind of data.
The code does the following:
1. counts words
2. weights word count features with TFIDF weighting
3. predicts the newsgroup from the weighted features
Models are optimized through:
1. Varying tokenization methods including character and word n-grams
2. Several model types
3. Randomized hyperparameter and tokenization option search
4. Ranking of several models so best models are visible
Code came from examples at:
1. http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
2. http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html
20 newsgroups dataset info is at http://scikit-learn.org/stable/datasets/index.html#the-20-newsgroups-text-dataset
Be sure to install the following (pip3 is python 3 and pip command will also work):
1. pip3 install sklearn
2. pip3 install pandas
3. pip3 install scipy
If I missed an install and you get an import error, try doing a pip3 install <import name>. Note that the kernel for jupyter needs to be the same version/installation of python you do the pip3 install in (python 3).
End of explanation
from sklearn.datasets import fetch_20newsgroups
# You can restrict the categories to simulate fewer classes
#categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
#categories = ['comp.graphics', 'sci.med']
#categories = ['alt.atheism', 'talk.religion.misc']
categories=None
twenty_train = fetch_20newsgroups(subset='train',
categories=categories, shuffle=True, random_state=42)
twenty_test = fetch_20newsgroups(subset='test',
categories=categories, shuffle=True, random_state=42)
Explanation: Load Data
End of explanation
twenty_train.target_names
len(twenty_train.data)
len(twenty_train.filenames)
print(twenty_train.data[0])
twenty_train.target_names[twenty_train.target[0]]
twenty_train.target
len(twenty_train.target)
twenty_train.target_names
Explanation: Investigate Training Set
End of explanation
len(twenty_test.data)
len(twenty_test.data) / len(twenty_train.data)
print(twenty_test.data[10])
twenty_test.target_names[twenty_test.target[10]]
Explanation: Test Set
End of explanation
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
print('training examples = ' + str(len(twenty_train.data)))
print('vocabulary length = ' + str(len(count_vect.vocabulary_)))
print('transformed training text matrix shape = ' + str(X_train_counts.shape))
# vocabulary_ is dict of word string -> word index
list(count_vect.vocabulary_.items())[:50]
Explanation: Test Tokenization
CountVectorizer
End of explanation
text = ['The The rain in spain.', 'The brown brown fox.']
counts_matrix = count_vect.transform(text)
type(counts_matrix)
counts_matrix.data
counts_matrix.indptr
counts_matrix.indices
Explanation: Test transform on some short text
End of explanation
from scipy.sparse import coo_matrix
coo = coo_matrix(counts_matrix)
#print(np.stack((coo.row, coo.col, coo.data)))
df = pd.DataFrame({'row':coo.row, 'column':coo.col, 'count':coo.data},
columns=['row','column', 'count'])
df
Explanation: Convert to coo sparse matrix for easier display
End of explanation
inverse_vocabulary=np.empty(len(count_vect.vocabulary_), dtype=object)
for key,value in count_vect.vocabulary_.items():
inverse_vocabulary[value] = key
for i in coo.col:
print(i, inverse_vocabulary[i])
words = [inverse_vocabulary[i] for i in coo.col]
df = pd.DataFrame({'row':coo.row, 'column':coo.col, 'count':coo.data, 'word':words})
df = df[ ['row','column', 'count', 'word'] ]
df
Explanation: Build inverse vocabulary
End of explanation
tfidf = TfidfTransformer()
tfidf.fit(X_train_counts) # compute weights on whole training set
tfidf_matrix = tfidf.transform(counts_matrix) # transform examples
print( 'tfidf_matrix type = ' + str(type(tfidf_matrix)) )
print( 'tfidf_matrix shape = ' + str(tfidf_matrix.shape) )
coo_tfidf = coo_matrix(tfidf_matrix)
words_tfidf = [inverse_vocabulary[i] for i in coo_tfidf.col]
df = pd.DataFrame({'row':coo_tfidf.row, 'column':coo_tfidf.col,
'value':coo_tfidf.data, 'word':words_tfidf})
df = df[ ['row','column', 'value', 'word'] ]
df
import scipy
scipy.sparse.linalg.norm(tfidf_matrix, axis=1)
Explanation: TFIDF
TfidfTransformer
End of explanation
tfidf.idf_.shape
words = ['the', 'very', 'car', 'vector', 'africa']
for word in words:
word_index = count_vect.vocabulary_[word]
print(word + ' = ' + str(tfidf.idf_[word_index]))
Explanation: Notice the following in the above values:
1. frequent words like 'the' and 'in' are down weighted
2. Each matrix row has a Euclidean norm of 1.0
Tfidf Weights
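For reference, with its default smooth_idf=True, scikit-learn computes idf(t) = ln((1 + n) / (1 + df(t))) + 1, where n is the number of documents and df(t) is how many documents contain term t, so a term appearing in every document gets weight 1.0 rather than 0. A hand calculation on a toy corpus (the counts here are illustrative, not taken from the 20-newsgroups data):

```python
import math

n = 4                      # documents in a toy corpus
df = {'the': 4, 'car': 1}  # document frequency of each term

for term, d in df.items():
    idf = math.log((1 + n) / (1 + d)) + 1
    print(term, round(idf, 3))   # the -> 1.0, car -> 1.916
```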
End of explanation
text_clf = Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', MultinomialNB()),
])
text_clf.fit(twenty_train.data, twenty_train.target)
predicted = text_clf.predict(twenty_test.data)
np.mean(predicted == twenty_test.target)
from sklearn import metrics
print(metrics.classification_report(twenty_test.target, predicted,
target_names=twenty_test.target_names))
df = pd.DataFrame(metrics.confusion_matrix(twenty_test.target, predicted))
df
Explanation: Pipelines
Pipelines pass the output of one transform to the input of the next.
Pipeline
End of explanation
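To make the chaining explicit, here is a toy re-implementation of the idea — the `Scale`/`AddOne` steps and `ToyPipeline` class are hypothetical stand-ins, not the sklearn API:

```python
class Scale:
    def fit(self, X): return self
    def transform(self, X): return [x / 10 for x in X]

class AddOne:
    def fit(self, X): return self
    def transform(self, X): return [x + 1 for x in X]

class ToyPipeline:
    def __init__(self, steps):
        self.steps = steps
    def transform(self, X):
        for name, step in self.steps:
            X = step.fit(X).transform(X)  # output of one step feeds the next
        return X

p = ToyPipeline([("scale", Scale()), ("add", AddOne())])
print(p.transform([10, 20, 30]))  # -> [2.0, 3.0, 4.0]
```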
class QAResults:
def init(self, Y_expected, Y_predicted, X, class_labels):
self.Y_expected = Y_expected
self.Y_predicted = Y_predicted
self.X = X
self.class_labels = class_labels
self.next_error_index = 0
        self.errors = np.nonzero(Y_expected - Y_predicted) # returns indices of non-zero elements
print(self.errors)
def display_next(self):
if(self.next_error_index >= self.errors[0].shape[0]):
self.next_error_index = 0 # cycle back around
X_index = self.errors[0][self.next_error_index]
print('index = ', X_index )
print('Expected = ' + self.class_labels[self.Y_expected[X_index]])
print('Predicted = ' + self.class_labels[self.Y_predicted[X_index]])
print('\nX['+ str(X_index) +']')
print( self.X[X_index] )
self.next_error_index +=1
def header(str):
display(HTML('<h3>'+str+'</h3>'))
tests = {}
def test_pipeline(pipeline, name=None, verbose=True, qa_test = None):
start=time.time()
pipeline.fit(twenty_train.data, twenty_train.target)
predicted = pipeline.predict(twenty_test.data)
elapsed_time = (time.time() - start)
accuracy = np.mean(predicted == twenty_test.target)
f1 = metrics.f1_score(twenty_test.target, predicted, average='macro')
print( 'F1 = %.3f \nAccuracy = %.3f\ntime = %.3f sec.' % (f1, accuracy, elapsed_time))
if(verbose):
header('Classification Report')
print(metrics.classification_report(twenty_test.target, predicted,
target_names=twenty_test.target_names, digits=3))
header('Confusion Matrix (row=expected, col=predicted)')
df = pd.DataFrame(metrics.confusion_matrix(twenty_test.target, predicted))
df.columns = twenty_test.target_names
df['Expected']=twenty_test.target_names
df.set_index('Expected',inplace=True)
display(df)
if name is not None:
tests[name]={'Name':name, 'Accuracy':accuracy, 'F1':f1, 'Time':elapsed_time,
'Details':pipeline.get_params(deep=True)}
if qa_test is not None:
qa_test.init( twenty_test.target, predicted, twenty_test.data, twenty_test.target_names)
qa_test=QAResults()
test_pipeline(text_clf, qa_test=qa_test)
qa_test.display_next() # re-run this cell to see next error
Explanation: Function to Test a Pipeline
End of explanation
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()), # <-- with weighting
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-5, random_state=42,
max_iter=40)),
]), verbose=False)
test_pipeline(Pipeline([('cvect', CountVectorizer()), # <-- no weighting
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-5, random_state=42,
max_iter=40)),
]), verbose=False)
Explanation: Importance of TFIDF Weighting
End of explanation
test_pipeline(Pipeline([('tfidf_v', TfidfVectorizer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False)
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name='hinge loss')
Explanation: TfidfVectorizer combines CountVectorizer and TfidfTransformer
End of explanation
# hinge loss is a linear SVM
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name='hinge loss')
# log loss is logistic regression
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='log', penalty='l2',
alpha=1e-6, random_state=42,
max_iter=10 )),
]), verbose=False, name='log loss')
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='log', penalty='none',
alpha=1e-6, random_state=42,
max_iter=10 )),
]), verbose=False, name='log loss no regularization')
Explanation: Hyper-parameter tests on SGDClassifier
End of explanation
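For reference, the two loss settings compared below correspond to different per-example objectives. A quick sketch of both as a function of the signed margin (margin = y * w·x):

```python
import math

def hinge_loss(margin):   # loss='hinge' -> linear SVM objective
    return max(0.0, 1.0 - margin)

def log_loss(margin):     # loss='log' -> logistic regression objective
    return math.log(1.0 + math.exp(-margin))

for m in (-1.0, 0.0, 1.0, 2.0):
    print(m, round(hinge_loss(m), 3), round(log_loss(m), 3))
```

Hinge loss is exactly zero once the margin exceeds 1, while log loss keeps shrinking but never reaches zero — one reason the two settings tune differently.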
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', MultinomialNB()),
]), verbose=False, name='MultinomialNB')
Explanation: Test Naive Bayes model
MultinomialNB
End of explanation
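MultinomialNB is simple enough to sketch from scratch. The following toy implementation (hypothetical two-class corpus, Laplace smoothing with alpha=1) shows the log-prior plus log-likelihood scoring it performs:

```python
import math
from collections import Counter

# toy training data: class label -> documents (token lists)
train = {
    "auto":  [["car", "engine", "wheel"], ["car", "road"]],
    "space": [["orbit", "rocket"], ["rocket", "engine", "orbit"]],
}

def fit_nb(train, alpha=1.0):
    vocab = {w for docs in train.values() for d in docs for w in d}
    n_docs = sum(len(docs) for docs in train.values())
    priors, likelihoods = {}, {}
    for label, docs in train.items():
        priors[label] = math.log(len(docs) / n_docs)
        counts = Counter(w for d in docs for w in d)
        total = sum(counts.values())
        likelihoods[label] = {
            w: math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
            for w in vocab
        }
    return priors, likelihoods

def predict(doc, priors, likelihoods):
    scores = {
        label: priors[label]
        + sum(likelihoods[label][w] for w in doc if w in likelihoods[label])
        for label in priors
    }
    return max(scores, key=scores.get)

priors, lik = fit_nb(train)
print(predict(["car", "wheel"], priors, lik))     # auto
print(predict(["rocket", "orbit"], priors, lik))  # space
```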
from sklearn.neighbors import KNeighborsClassifier
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('knn', KNeighborsClassifier(n_neighbors=5)),
]), verbose=False, name='KNN n=5')
from sklearn.neighbors import KNeighborsClassifier
for n in range(1,7):
print( '\nn = ' + str(n))
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('knn', KNeighborsClassifier(n_neighbors=n)),
]), verbose=False, name='KNN n=' + str(n))
from sklearn.neighbors import KNeighborsClassifier
for n in range(1,7):
print( '\nn = ' + str(n))
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('knn', KNeighborsClassifier(n_neighbors=n, weights='distance')),
]), verbose=False, name='KNN n=' + str(n) + ' distance weights')
Explanation: K-nearest neighbors model
KNeighborsClassifier
End of explanation
from sklearn.neighbors.nearest_centroid import NearestCentroid
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', NearestCentroid(metric='euclidean')),
]), verbose=False, name='NearestCentroid')
Explanation: Nearest Centroid Model
NearestCentroid
End of explanation
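The idea behind NearestCentroid is compact enough to sketch directly: average each class's vectors into a centroid, then assign a new point to the nearest one. The 2-D data below is made up for illustration:

```python
def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_centroid_predict(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

train = {"a": [[0, 0], [0, 2]], "b": [[10, 10], [12, 10]]}
centroids = {label: centroid(pts) for label, pts in train.items()}
print(centroids)                                      # {'a': [0.0, 1.0], 'b': [11.0, 10.0]}
print(nearest_centroid_predict([1, 1], centroids))    # 'a'
print(nearest_centroid_predict([11, 9], centroids))   # 'b'
```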
from sklearn.linear_model import LogisticRegression
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', LogisticRegression(solver='sag', multi_class='multinomial', n_jobs=-1)),
]), verbose=False, name='LogisticRegression multinomial')
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', LogisticRegression(solver='sag', multi_class='ovr',n_jobs=-1)),
]), verbose=False, name='LogisticRegression ovr')
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', LogisticRegression(C=10, solver='sag', multi_class='multinomial', n_jobs=-1, max_iter=200)),
]), verbose=False, name='LogisticRegression multinomial C=10')
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', LogisticRegression(C=100, solver='sag', multi_class='multinomial', n_jobs=-1, max_iter=200)),
]), verbose=False, name='LogisticRegression multinomial C=100')
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', LogisticRegression(C=1000, solver='sag', multi_class='multinomial', n_jobs=-1, max_iter=200)),
]), verbose=False, name='LogisticRegression multinomial C=1000')
Explanation: Logistic Regression
This is the same objective as SGDClassifier with log loss, but uses a different solver/code path.
End of explanation
p = Pipeline([('cvect', CountVectorizer(stop_words='english', ngram_range=(1,2),
max_df = 0.88, min_df=1)),
('tfidf', TfidfTransformer(sublinear_tf=True)),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=4e-4, random_state=42,
max_iter=40 )),
])
test_pipeline(p, verbose=False)
# Adapted from https://stackoverflow.com/questions/11116697/how-to-get-most-informative-features-for-scikit-learn-classifiers
def show_most_informative_features(vectorizer, clf, class_labels, n=50):
feature_names = vectorizer.get_feature_names()
for row in range(clf.coef_.shape[0]):
coefs_with_fns = sorted(zip(clf.coef_[row], feature_names))
top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
print( '\nclass = ' + class_labels[row])
l = [[fn_1, coef_1,fn_2,coef_2] for (coef_1, fn_1), (coef_2, fn_2) in top]
df = pd.DataFrame(l, columns=['Smallest Word', 'Smallest Weight', 'Largest Word', 'Largest Weight'])
display(df)
show_most_informative_features(p.named_steps['cvect'], p.named_steps['sgdc'], twenty_train.target_names)
p = Pipeline([('cvect', CountVectorizer( analyzer='char', ngram_range=(5,5),
max_df = 0.88, min_df=1)),
('tfidf', TfidfTransformer(sublinear_tf=True)),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=4e-4, random_state=42,
max_iter=40 )),
])
test_pipeline(p, verbose=False)
show_most_informative_features(p.named_steps['cvect'], p.named_steps['sgdc'], twenty_train.target_names)
Explanation: Most Influential Features
End of explanation
test_pipeline(Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer(use_idf=False)),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name='use_idf=False')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name='stopwords')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english', ngram_range=(1,2),
max_df = 0.8, min_df=2)),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name='ngram_range=(1,2)')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer(norm=None)),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name='norm = None')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer(sublinear_tf=True)),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name='sublinear_tf=True')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer(norm='l1')),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name='norm=l1')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english', ngram_range=(1,3),
max_df = 0.8, min_df=2)),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name='ngram_range=(1,3)')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english', ngram_range=(1,2),
max_df = 0.8, min_df=2)),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 )),
]), verbose=False, name = 'ngram_range=(1,2), max_df = 0.8, min_df=2')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english', ngram_range=(1,2),
max_df = 0.8, min_df=2)),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 , n_jobs=-1)),
]), verbose=False, name='ngram_range=(1,2), max_df = 0.8, min_df=2')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english', token_pattern="[a-zA-Z]{3,}")),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 , n_jobs=-1)),
]), verbose=False, name='no numbers')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english', token_pattern="[a-zA-Z0-9.-]{1,}")),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 , n_jobs=-1)),
]), verbose=False, name='dots in words')
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english',
ngram_range=(1,2), min_df=3,max_df=0.8)),
('tfidf', TfidfTransformer(sublinear_tf=True)),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 , n_jobs=-1)),
]), verbose=False, name='ngram_range=(1,2), min_df=3,max_df=0.8, sublinear_tf')
Explanation: Tokenization Tests
Docs:
CountVectorizer
TfidfTransformer
End of explanation
for n in range(3,8,1):
print('\nN-grams = '+ str(n))
test_pipeline(Pipeline([('cvect', CountVectorizer(analyzer='char', ngram_range=(n,n),
min_df=2, max_df=0.9)),
('tfidf', TfidfTransformer(sublinear_tf=True)),
('sgdc', SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-4, random_state=42,
max_iter=40 , n_jobs=-1)),
]), verbose=False, name='char ngram ' + str(n) + ' + sublinear_tf')
Explanation: Char N-grams
End of explanation
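A simplified sketch of what a character n-gram analyzer extracts from a token. Note this is only an approximation: CountVectorizer's `analyzer='char'` operates on the raw text including spaces, so its real output differs slightly.

```python
def char_ngrams(text, n):
    # all contiguous character windows of length n
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(char_ngrams("vector", 3))  # ['vec', 'ect', 'cto', 'tor']
```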
from sklearn.svm import SVC
from sklearn.decomposition import TruncatedSVD
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('svd', TruncatedSVD(n_components=300)),
('svc', SVC(kernel='linear', C=10)),
]), verbose=False, name='SVC + TruncatedSVD')
from sklearn.svm import SVC
from sklearn.decomposition import TruncatedSVD
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('svc', SVC(kernel='linear')),
]), verbose=False, name='SVC')
from sklearn.svm import LinearSVC
from sklearn.decomposition import TruncatedSVD
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('sgdc', LinearSVC(C=10)),
]), verbose=False, name='LinearSVC, C=10')
from sklearn.svm import LinearSVC
from sklearn.decomposition import TruncatedSVD
test_pipeline(Pipeline([('cvect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('sgdc', LinearSVC(C=1)),
]), verbose=False, name='LinearSVC, C=1')
Explanation: SVM Model
SVC
LinearSVC
End of explanation
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import StandardScaler
from sklearn.datasets.twenty_newsgroups import strip_newsgroup_footer
from sklearn.datasets.twenty_newsgroups import strip_newsgroup_quoting
class ItemSelector(BaseEstimator, TransformerMixin):
    """For data grouped by feature, select subset of data at a provided key.

    The data is expected to be stored in a 2D data structure, where the first
    index is over features and the second is over samples, i.e.

    >>> len(data[key]) == n_samples

    Please note that this is the opposite convention to scikit-learn feature
    matrixes (where the first index corresponds to sample).

    ItemSelector only requires that the collection implement __getitem__
    (data[key]). Examples include: a dict of lists, 2D numpy array, Pandas
    DataFrame, numpy record array, etc.

    >>> data = {'a': [1, 5, 2, 5, 2, 8],
    ...         'b': [9, 4, 1, 4, 1, 3]}
    >>> ds = ItemSelector(key='a')
    >>> data['a'] == ds.transform(data)

    ItemSelector is not designed to handle data grouped by sample (e.g. a
    list of dicts). If your data is structured this way, consider a
    transformer along the lines of `sklearn.feature_extraction.DictVectorizer`.

    Parameters
    ----------
    key : hashable, required
        The key corresponding to the desired value in a mappable.
    """
def __init__(self, key):
self.key = key
def fit(self, x, y=None):
return self
def transform(self, data_dict):
return data_dict[self.key]
class TextStats(BaseEstimator, TransformerMixin):
    """Extract features from each document for DictVectorizer."""
def fit(self, x, y=None):
return self
def transform(self, posts):
return [{'length': len(text),
'num_sentences': text.count('.'),
'num_questions': text.count('?') ,
'num_dollars': text.count('$'),
'num_percent': text.count('%'),
'num_exclamations': text.count('!'),
}
for text in posts]
class SubjectBodyExtractor(BaseEstimator, TransformerMixin):
    """Extract the subject & body from a usenet post in a single pass.

    Takes a sequence of strings and produces a dict of sequences. Keys are
    `subject` and `body`.
    """
def fit(self, x, y=None):
return self
def transform(self, posts):
features = np.recarray(shape=(len(posts),),
dtype=[('subject', object), ('body', object)])
for i, text in enumerate(posts):
headers, _, bod = text.partition('\n\n')
bod = strip_newsgroup_footer(bod)
bod = strip_newsgroup_quoting(bod)
features['body'][i] = bod
prefix = 'Subject:'
sub = ''
for line in headers.split('\n'):
if line.startswith(prefix):
sub = line[len(prefix):]
break
features['subject'][i] = sub
return features
class Printer(BaseEstimator, TransformerMixin):
    """Print inputs."""
def __init__(self, count):
self.count = count
def fit(self, x, y=None):
return self
def transform(self, x):
if(self.count >0):
self.count-=1
print(x[0])
return x
pipeline = Pipeline([
# Extract the subject & body
('subjectbody', SubjectBodyExtractor()),
# Use FeatureUnion to combine the features from subject and body
('union', FeatureUnion(n_jobs=-1,
transformer_list=[
# Pipeline for pulling features from the post's subject line
('subject', Pipeline([
('selector', ItemSelector(key='subject')),
('tfidf', TfidfVectorizer(min_df=1)),
])),
# Pipeline for standard bag-of-words model for body
('body_bow', Pipeline([
('selector', ItemSelector(key='body')),
('tfidf', TfidfVectorizer()),
])),
# Pipeline for pulling ad hoc features from post's body
('body_stats', Pipeline([
('selector', ItemSelector(key='body')),
('stats', TextStats()), # returns a list of dicts
('cvect', DictVectorizer()), # list of dicts -> feature matrix
#('print',Printer(1)),
# scaling is needed so SGD model will have balanced feature gradients
('scale', StandardScaler(copy=False, with_mean=False, with_std=True) ),
#('print2',Printer(1)),
])),
],
# weight components in FeatureUnion
transformer_weights={
'subject': 1,
'body_bow': 1,
'body_stats': .1,
},
)),
#('print',Printer(1)),
# Use a SVC classifier on the combined features
#('svc', SVC(kernel='linear')),
('sgdc', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, random_state=42, max_iter=5 )),
])
test_pipeline(pipeline, verbose=False, name='metadata')
Explanation: Metadata Features
Code adapted from: http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html
End of explanation
from scipy.stats import expon as sp_expon
from scipy.stats import randint as sp_randint
from scipy.stats import uniform as sp_uniform
Explanation: Randomized Parameter Search
RandomizedSearchCV
End of explanation
r = sp_uniform(loc=5,scale=2).rvs(size=1000*1000)
fig, ax = plt.subplots(1, 1, figsize=(12, 5))
ax.hist(r, bins=100)
plt.show()
def geometric_sample(power_min, power_max, sample_size):
dist = sp_uniform(loc=power_min, scale=power_max-power_min)
return np.power(10, dist.rvs(size=sample_size))
geometric_sample(1,6,50)
Explanation: Random variable of 10^x with x uniformly distributed
End of explanation
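The same trick works with only the standard library. Sampling `10 ** x` with `x` uniform gives roughly equal probability mass to each decade, which is what you want when searching scale parameters like `alpha`:

```python
import random

def log_uniform(power_min, power_max):
    # 10 ** x with x ~ Uniform(power_min, power_max)
    return 10 ** random.uniform(power_min, power_max)

random.seed(42)
samples = [log_uniform(-8, -3) for _ in range(10000)]
first_decade = sum(1e-8 <= s < 1e-7 for s in samples) / len(samples)
print(all(1e-9 < s < 1e-2 for s in samples))  # all samples stay in range
print(abs(first_decade - 0.2) < 0.05)         # ~1/5 of the mass per decade
```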
from sklearn.model_selection import RandomizedSearchCV
pipeline = Pipeline([('cvect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('sgdc', SGDClassifier( random_state=42 )),
])
#ngram_range=(1,2), max_df = 0.8, min_df=2
param_dist = {"cvect__stop_words": [None,'english'],
"cvect__ngram_range": [(1,1),(1,2)],
"cvect__min_df": sp_randint(1, 6),
"cvect__max_df": sp_uniform(loc=0.5, scale=0.5), # range is (loc, loc+scale)
"tfidf__sublinear_tf": [True,False],
"tfidf__norm": [None, 'l1', 'l2'],
"sgdc__max_iter": sp_randint(5, 40),
"sgdc__loss": ['hinge','log'],
"sgdc__alpha": geometric_sample(-8,-3,10000),
}
# n_iter - number of random models to evaluate
# n_jobs = -1 to run in parallel on all cores
# cv = 4 , 4-fold cross validation
# scoring='f1_macro' , averages the F1 for each target class
rs = RandomizedSearchCV(pipeline, param_distributions=param_dist,
n_iter=5, n_jobs=-1, cv=3, return_train_score=False,
verbose=1, scoring='f1_macro', random_state=42)
test_pipeline(rs, verbose=False, name='Random Parameter Search')
Audio(url='./Beep 2.wav', autoplay=True)
#pd.get_option("display.max_columns")
pd.set_option("display.max_columns", 40)
header('Best')
display( pd.DataFrame.from_dict(rs.best_params_, orient= 'index') )
header('All Results')
df = pd.DataFrame(rs.cv_results_)
df = df.sort_values(['rank_test_score'])
display(df)
Explanation: Random Parameter Search
RandomizedSearchCV
Example
End of explanation
df = df.apply(pd.to_numeric, errors='ignore')
prefix = 'param_'
param_col = [col for col in df.columns if col.startswith(prefix) ]
for col in param_col:
name = col[len(prefix):]
header(name)
if(df[col].dtype == np.float64 or df[col].dtype == np.int64):
print( 'scatter')
df.plot(kind='scatter', x=col, y='mean_test_score', figsize=(15,10))
plt.show()
else:
mean = df[[col,'mean_test_score']].fillna(value='None').groupby(col).mean()
mean.plot(kind='bar', figsize=(10,10))
plt.show()
Explanation: Plot Parameter Performance
End of explanation
tests_df=pd.DataFrame.from_dict(tests, orient= 'index')
tests_df = tests_df.drop(['Name'], axis=1)
tests_df.columns=[ 'F1', 'Accuracy', 'Time (sec.)', 'Details']
tests_df = tests_df.sort_values(by=['F1'], ascending=False)
display(tests_df)
header('Best Model')
display(tests_df.head(1))
print(tests_df['Details'].values[0])
plt.figure(figsize=(13,5))
tests_df['F1'].plot(kind='bar', ylim=(0.6,None))
Audio(url='./Beep 2.wav', autoplay=True)
Explanation: All Tests
End of explanation |
8,430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statements Assessment Test
Lets test your knowledge!
Use for, split(), and if to create a Statement that will print out words that start with 's'
Step1: Use range() to print all the even numbers from 0 to 10.
Step2: Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.
Step3: Go through the string below and if the length of a word is even print "even!"
Step4: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
Step5: Use List Comprehension to create a list of the first letters of every word in the string below | Python Code:
st = 'Print only the words that start with s in this sentence'
#Code here
Explanation: Statements Assessment Test
Let's test your knowledge!
Use for, split(), and if to create a Statement that will print out words that start with 's':
End of explanation
#Code Here
Explanation: Use range() to print all the even numbers from 0 to 10.
End of explanation
#Code in this cell
[]
Explanation: Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.
End of explanation
st = 'Print every word in this sentence that has an even number of letters'
#Code in this cell
Explanation: Go through the string below and if the length of a word is even print "even!"
End of explanation
#Code in this cell
Explanation: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
End of explanation
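One possible solution to the FizzBuzz exercise above, written as a function so the result is easy to check (try it yourself before peeking):

```python
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return n

out = [fizzbuzz(n) for n in range(1, 101)]
print(out[:5])   # [1, 2, 'Fizz', 4, 'Buzz']
print(out[14])   # 'FizzBuzz'
```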
st = 'Create a list of the first letters of every word in this string'
#Code in this cell
Explanation: Use List Comprehension to create a list of the first letters of every word in the string below:
End of explanation |
8,431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encoding and Decoding Simple Data Types
Step1: Encoding, then re-decoding may not give exactly the same type of object
Step2: you can see that, tuple become list
Human-consumable vs. Compact Output
you can sort output
Step3: You can specify a value for indent, so the output is formatted
Step4: Make it more compact
Step5: The separators argument to dumps() should be a tuple containing the strings to separate items in a list and keys from values in a dictionary. The default is (', ', '
Step6: Working with custom Types
Convert to known type
Step7: convert_to_buitin_type convert the MyObj class instance into a dictionary which JSON can endode
To decode the results and create a MyObj() instance, use the object_hook argument to loads() to tie in to the decoder so the class can be imported from the module and used to create the instance.
Step8: Encoder and Decoder Classes
Besides the convenience functions already covered, the json module provides classes for encoding and decoding. Using the classes directly gives access to extra APIs for customizing their behavior.
The JSONEncoder uses an iterable interface for producing “chunks” of encoded data, making it easier to write to files or network sockets without having to represent an entire data structure in memory.
Step9: To encode arbitrary objects, override the default() method with an implementation similar to the one used in convert_to_builtin_type().
Step10: Decoding
Step11: Working with Streams and Files
All of the examples so far have assumed that the encoded version of the entire data structure could be held in memory at one time. With large data structures, it may be preferable to write the encoding directly to a file-like object. The convenience functions load() and dump() accept references to a file-like object to use for reading or writing
Step12: Mixed Data Streams
JSONDecoder includes raw_decode(), a method for decoding a data structure followed by more data, such as JSON data with trailing text. The return value is the object created by decoding the input data, and an index into that data indicating where decoding left off.
Step13: Unfortunately, this only works if the object appears at the beginning of the input. | Python Code:
data = [{'a': 'A', 'b': (2, 4), 'c': 3.0}]
print('DATA:', repr(data))
data_string = json.dumps(data)
print('JSON:', data_string)
print(type(data_string))
Explanation: Encoding and Decoding Simple Data Types
End of explanation
data = [{'a': 'A', 'b': (2, 4), 'c': 3.0}]
print('DATA :', data)
data_string = json.dumps(data)
print('ENCODED:', data_string)
decoded = json.loads(data_string)
print('DECODED:', decoded)
print('ORIGINAL:', type(data[0]['b']))
print('DECODED :', type(decoded[0]['b']))
Explanation: Encoding, then re-decoding may not give exactly the same type of object
End of explanation
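The asymmetry is easy to demonstrate in isolation: JSON has no tuple type, so tuples are serialized as arrays and come back as lists.

```python
import json

value = {"point": (2, 4)}
round_tripped = json.loads(json.dumps(value))
print(type(value["point"]).__name__)          # tuple
print(type(round_tripped["point"]).__name__)  # list
```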
import json
data = [{'a': 'A', 'c': 3.0, 'b': (2, 4)}]
print('DATA:', repr(data))
unsorted = json.dumps(data)
print('JSON:', unsorted)
print('SORT:', json.dumps(data, sort_keys=True))
first = json.dumps(data, sort_keys=False)
second = json.dumps(data, sort_keys=True)
print('UNSORTED MATCH:', unsorted == first)
print('SORTED MATCH :', first == second)
Explanation: you can see that the tuple became a list
Human-consumable vs. Compact Output
you can sort output
End of explanation
data = [{'a': 'A', 'b': (2, 4), 'c': 3.0}]
print('DATA:', repr(data))
print('NORMAL:', json.dumps(data, sort_keys=True))
print('INDENT:', json.dumps(data, sort_keys=True, indent=2))
Explanation: You can specify a value for indent, so the output is formatted
End of explanation
data = [{'a': 'A', 'b': (2, 4), 'c': 3.0}]
print('DATA:', repr(data))
print('repr(data) :', len(repr(data)))
plain_dump = json.dumps(data)
print('dumps(data) :', len(plain_dump))
small_indent = json.dumps(data, indent=2)
print('dumps(data, indent=2) :', len(small_indent))
with_separators = json.dumps(data, separators=(',', ':'))
print('dumps(data, separators):', len(with_separators))
Explanation: Make it more compact
End of explanation
import json
data = [{'a': 'A', 'b': (2, 4), 'c': 3.0, ('d',): 'D tuple'}]
print('First attempt')
try:
print(json.dumps(data))
except TypeError as err:
print('ERROR:', err)
print()
print('Second attempt')
print(json.dumps(data, skipkeys=True))
Explanation: The separators argument to dumps() should be a tuple containing the strings to separate items in a list and keys from values in a dictionary. The default is (', ', ': '). By removing the whitespace, a more compact output is produced
Encoding Dictionaries
The JSON format expects the keys of a dictionary to be strings. Trying to encode a dictionary with non-string types as keys produces a TypeError. One way to work around that limitation is to tell the encoder to skip over non-string keys using the skipkeys argument
End of explanation
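A minimal demonstration of the skipkeys behavior: without it the tuple key raises, with it the offending entry is silently dropped.

```python
import json

data = {"a": 1, (1, 2): "tuple key"}
try:
    json.dumps(data)
except TypeError as err:
    print("without skipkeys:", err)
print("with skipkeys:", json.dumps(data, skipkeys=True))  # {"a": 1}
```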
class MyObj:
def __init__(self, s):
self.s = s
def __repr__(self):
return '<MyObj({})>'.format(self.s)
obj = MyObj('instance value goes here')
print('First attempt')
try:
print(json.dumps(obj))
except TypeError as err:
print('ERROR:', err)
def convert_to_builtin_type(obj):
print('default(', repr(obj), ')')
# Convert objects to a dictionary of their representation
d = {
'__class__': obj.__class__.__name__,
'__module__': obj.__module__,
}
d.update(obj.__dict__)
return d
print()
print('With default')
print(json.dumps(obj, default=convert_to_builtin_type))
Explanation: Working with custom Types
Convert to known type
End of explanation
def dict_to_object(d):
if '__class__' in d:
class_name = d.pop('__class__')
module_name = d.pop('__module__')
module = __import__(module_name)
print('MODULE:', module.__name__)
class_ = getattr(module, class_name)
print('CLASS:', class_)
args = {
key: value
for key, value in d.items()
}
print('INSTANCE ARGS:', args)
inst = class_(**args)
else:
inst = d
return inst
encoded_object = '''
[{"s": "instance value goes here",
"__module__": "__main__", "__class__": "MyObj"}]
'''
myobj_instance = json.loads(
encoded_object,
object_hook=dict_to_object,
)
print(myobj_instance)
Explanation: convert_to_builtin_type converts the MyObj class instance into a dictionary which JSON can encode
To decode the results and create a MyObj() instance, use the object_hook argument to loads() to tie in to the decoder so the class can be imported from the module and used to create the instance.
End of explanation
encoder = json.JSONEncoder()
data = [{'a': 'A', 'b': (2, 4), 'c': 3.0}]
for part in encoder.iterencode(data):
print('PART:', part)
Explanation: Encoder and Decoder Classes
Besides the convenience functions already covered, the json module provides classes for encoding and decoding. Using the classes directly gives access to extra APIs for customizing their behavior.
The JSONEncoder uses an iterable interface for producing “chunks” of encoded data, making it easier to write to files or network sockets without having to represent an entire data structure in memory.
End of explanation
class MyEncoder(json.JSONEncoder):
def default(self, obj):
print('default(', repr(obj), ')')
# Convert objects to a dictionary of their representation
d = {
'__class__': obj.__class__.__name__,
'__module__': obj.__module__,
}
d.update(obj.__dict__)
return d
obj = MyObj('internal data')
print(obj)
print(MyEncoder().encode(obj))
Explanation: To encode arbitrary objects, override the default() method with an implementation similar to the one used in convert_to_builtin_type().
End of explanation
class MyDecoder(json.JSONDecoder):
def __init__(self):
json.JSONDecoder.__init__(
self,
object_hook=self.dict_to_object,
)
def dict_to_object(self, d):
if '__class__' in d:
class_name = d.pop('__class__')
module_name = d.pop('__module__')
module = __import__(module_name)
print('MODULE:', module.__name__)
class_ = getattr(module, class_name)
print('CLASS:', class_)
args = {
key: value
for key, value in d.items()
}
print('INSTANCE ARGS:', args)
inst = class_(**args)
else:
inst = d
return inst
encoded_object = '''
[{"s": "instance value goes here",
"__module__": "__main__", "__class__": "MyObj"}]
'''
myobj_instance = MyDecoder().decode(encoded_object)
print(myobj_instance)
Explanation: Decoding:
End of explanation
import io
data = [{'a': 'A', 'b': (2, 4), 'c': 3.0}]
f = io.StringIO()
json.dump(data, f)
print(f.getvalue())
Explanation: Working with Streams and Files
All of the examples so far have assumed that the encoded version of the entire data structure could be held in memory at one time. With large data structures, it may be preferable to write the encoding directly to a file-like object. The convenience functions load() and dump() accept references to a file-like object to use for reading or writing
End of explanation
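dump() writes to any file-like object, and load() reads one back; the same StringIO buffer can demonstrate the full round trip (remember to rewind before reading):

```python
import io
import json

buf = io.StringIO()
json.dump({"a": 1, "b": [2, 4]}, buf)
buf.seek(0)                  # rewind before reading back
restored = json.load(buf)
print(restored)              # {'a': 1, 'b': [2, 4]}
```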
decoder = json.JSONDecoder()
def get_decoded_and_remainder(input_data):
obj, end = decoder.raw_decode(input_data)
remaining = input_data[end:]
return (obj, end, remaining)
encoded_object = '[{"a": "A", "c": 3.0, "b": [2, 4]}]'
extra_text = 'This text is not JSON.'
print('JSON first:')
data = ' '.join([encoded_object, extra_text])
obj, end, remaining = get_decoded_and_remainder(data)
print('Object :', obj)
print('End of parsed input :', end)
print('Remaining text :', repr(remaining))
print()
print('JSON embedded:')
try:
data = ' '.join([extra_text, encoded_object, extra_text])
obj, end, remaining = get_decoded_and_remainder(data)
except ValueError as err:
print('ERROR:', err)
Explanation: Mixed Data Streams
JSONDecoder includes raw_decode(), a method for decoding a data structure followed by more data, such as JSON data with trailing text. The return value is the object created by decoding the input data, and an index into that data indicating where decoding left off.
End of explanation
!ls
# shows the data reformatted in order
! python -m json.tool example.json
# uses --sort-keys to sort the mapping keys before printing the output.
! python -m json.tool --sort-keys example.json
Explanation: Unfortunately, this only works if the object appears at the beginning of the input.
End of explanation |
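One workaround, sketched below, is to scan forward for the first character that could start a JSON document and hand raw_decode() the remainder. This is only a sketch: a production version would need to keep scanning past false starts, such as a stray bracket in the surrounding prose.

```python
import json

decoder = json.JSONDecoder()

def decode_embedded(text):
    # Scan for the first '{' or '[' and try raw_decode from there
    for i, ch in enumerate(text):
        if ch in '{[':
            try:
                obj, end = decoder.raw_decode(text[i:])
                return obj, i + end
            except ValueError:
                continue
    raise ValueError('no JSON object found')

obj, end = decode_embedded('leading text [{"a": "A"}] trailing text')
print(obj)
```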
8,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In today's post we will take a look at the NLP classification task. One of the simpler algorithms is Bag-Of-Words. Each word is one-hot encoded, then the words of a document are averaged and put through the classifier.
As a dataset we are going to use movie reviews which can be downloaded from Kaggle. A word of disclaimer
Step1: AUC of Receiver Operating Characteristic curve shows 0.87 which is a decent margin vs random binary classification. Given we have two classes
Step2: A bit better, although I am not too happy with the 'scientific' method here. A better choice would be to use a parameter grid search over a dev set defined by cross-validation, but I'll reserve this for the next post. For the moment this should suffice.
Let's take a look where the classifier fails on a random data sample. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from bs4 import BeautifulSoup
from matplotlib import pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import roc_curve, auc
# load data
df = pd.read_csv( 'labeledTrainData.tsv', sep='\t' )
# convert the data
df.insert( 3, 'converted', df.iloc[ :, 2 ].apply( lambda x: BeautifulSoup( x, 'html.parser' ).get_text() ) )
print( 'available columns: {0}'.format( df.columns ) )
# train test / ratio of 0.66
tt_index = np.random.binomial( 1, 0.66, size=df.shape[0] )
train = df[ tt_index == 1 ]
test = df[ tt_index == 0 ]
vectorizer = TfidfVectorizer( encoding='latin1' )
vectorizer.fit_transform( train.iloc[ :, 3 ] )
# prepare data
X_train = vectorizer.transform( train.iloc[ :, 3 ] )
y_train = train.iloc[ :, 1 ]
X_test = vectorizer.transform( test.iloc[ :, 3 ] )
y_test = test.iloc[ :, 1 ]
# let's take a look how input classes are distributed.
# Having more or less equall frequency will help predictor training
train.hist( column=(1) )
plt.show()
ch2 = SelectKBest(chi2, k=100 )
X_train = ch2.fit_transform( X_train, y_train ).toarray()
X_test = ch2.transform( X_test ).toarray()
# we're going to use Gradient Boosted Tree classifier. These methods showed good performance on few Kaggle competitions
clf = GradientBoostingClassifier( n_estimators=100, learning_rate=1.0, max_depth=5, random_state=0 )
clf.fit(X_train, y_train)
y_score = clf.decision_function( X_test )
fpr, tpr, thresholds = roc_curve( y_test.ravel(), y_score.ravel() )
roc_auc = auc( fpr, tpr )
# Plot ROC curve (the fpr/tpr pair from roc_curve, not precision/recall)
plt.clf()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.plot( fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc )
plt.legend(loc="lower right")
plt.show()
Explanation: In today's post we will take a look at the NLP classification task. One of the simpler algorithms is Bag-Of-Words. Each word is one-hot encoded, then the words of a document are averaged and put through the classifier.
As a dataset we are going to use movie reviews, which can be downloaded from Kaggle. A word of disclaimer: the code below is partially based on the sklearn tutorial as well as on the very good NLP course CS224d from Stanford University
End of explanation
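The one-hot-and-average idea from the introduction can be sketched directly in numpy; the tiny vocabulary and document below are made up purely for illustration:

```python
import numpy as np

# Toy vocabulary and document (both invented for this example)
vocab = {'good': 0, 'bad': 1, 'movie': 2, 'great': 3}
doc = ['good', 'movie', 'great']

# One-hot encode each word, then average over the document
one_hot = np.zeros((len(doc), len(vocab)))
for row, word in enumerate(doc):
    one_hot[row, vocab[word]] = 1.0
doc_vector = one_hot.mean(axis=0)
print(doc_vector)
```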
from sklearn.svm import SVC
clf2 = SVC( kernel='linear' )
clf2.fit( X_train, y_train )
y_score = clf2.decision_function( X_test )
fpr, tpr, thresholds = roc_curve( y_test.ravel(), y_score.ravel() )
roc_auc = auc( fpr, tpr )
# Plot ROC curve (the fpr/tpr pair from roc_curve, not precision/recall)
plt.clf()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.plot( fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc )
plt.legend(loc="lower right")
plt.show()
Explanation: The AUC of the Receiver Operating Characteristic curve is 0.87, a decent margin over random binary classification. Given that we have two classes (good review or bad review), it makes sense to try a linear hyperplane classifier: an SVM.
End of explanation
y_pred = clf2.predict( X_test )
y_pred[ 0:10 ]
y_test[ 0:10 ]
test.iloc[ 1, 3 ]
Explanation: A bit better, although I am not too happy with the 'scientific' method here. A better choice would be to use a parameter grid search over a dev set defined by cross-validation, but I'll reserve this for the next post. For the moment this should suffice.
Let's take a look at where the classifier fails on a random data sample.
End of explanation |
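For reference, the grid search mentioned above might look like the sketch below. The parameter values and the synthetic stand-in data are illustrative; on recent scikit-learn versions GridSearchCV lives in sklearn.model_selection (older releases used sklearn.grid_search).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Small synthetic stand-in for the TF-IDF feature matrix
rng = np.random.RandomState(0)
X = rng.randn(60, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

param_grid = {'C': [0.1, 1.0, 10.0]}  # illustrative values
search = GridSearchCV(SVC(kernel='linear'), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```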
8,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PubChemPy examples
Table of Contents
1. Introduction
2. Getting Started
2. Getting Started
Retrieving a Compound
Retrieving information about a specific Compound in the PubChem database is simple.
Begin by importing PubChemPy
Step1: Let’s get the Compound with CID 5090
Step2: Now we have a Compound object called c. We can get all the information we need from this object
Step3: Searching
What if you don’t know the PubChem CID of the Compound you want? Just use the get_compounds() function
Step4: The first argument is the identifier, and the second argument is the identifier type, which must be one of name, smiles, sdf, inchi, inchikey or formula. It looks like there are 4 compounds in the PubChem Database that have the name Glucose associated with them. Let’s take a look at them in more detail
Step5: It looks like they all have different stereochemistry information.
Retrieving the record for a SMILES string is just as easy | Python Code:
import pubchempy as pcp
Explanation: PubChemPy examples
Table of Contents
1. Introduction
2. Getting Started
2. Getting Started
Retrieving a Compound
Retrieving information about a specific Compound in the PubChem database is simple.
Begin by importing PubChemPy:
End of explanation
c = pcp.Compound.from_cid(5090)
c
Explanation: Let’s get the Compound with CID 5090:
End of explanation
print(c.molecular_formula)
print(c.molecular_weight)
print(c.isomeric_smiles)
print(c.xlogp)
print(c.iupac_name)
print(c.synonyms)
Explanation: Now we have a Compound object called c. We can get all the information we need from this object:
End of explanation
results = pcp.get_compounds('Glucose', 'name')
results
Explanation: Searching
What if you don’t know the PubChem CID of the Compound you want? Just use the get_compounds() function:
End of explanation
for compound in results:
    print(compound.isomeric_smiles)
Explanation: The first argument is the identifier, and the second argument is the identifier type, which must be one of name, smiles, sdf, inchi, inchikey or formula. It looks like there are 4 compounds in the PubChem Database that have the name Glucose associated with them. Let’s take a look at them in more detail:
End of explanation
pcp.get_compounds('C1=CC2=C(C3=C(C=CC=N3)C=C2)N=C1', 'smiles')
Explanation: It looks like they all have different stereochemistry information.
Retrieving the record for a SMILES string is just as easy:
End of explanation |
8,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 2 - Implementation of Shaffer et al
Due January 25 at 8 PM
Step1: (1) Estimation of a sample mean from a normally distributed variable.
Let us assume that a true distribution of a process is described by the normal distribution with $\mu=5$ and $\sigma=1$. You have a measurement technique that allows you to sample n points from this distribution. In Matlab this is a random number generator whose numbers will be chosen from the desired normal distribution by using the call normrnd(mu, sigma, [1, n]). Sample from this normal distribution from n=1 to 50 (I.e. n=1
Step2: (1a) Plot the standard deviation of the estimate of the sample mean versus n. Add a second line which is 1/sqrt(n). Describe what this tells you about the relationship between n and your power to estimate the underlying mean.
Step3: This shows that the standard deviation of the sample mean (also called the standard error) follows a $1/\sqrt{n}$ relationship.
1b. Plot the boxplot for the sample means for all values n. Using words, interpret what the boxplot view of the 1000 trials for n=1 means and what the trends in the boxplot demonstrate compared to the plot in 1a (I.e. What information do you gain or lose in the two different plotting schemes)?
Step4: The box plot shows that the values with higher n converge on the true mean (5 in this case). The box plot shows the overall distribution of values in greater detail, while the standard plot is easier to read for a single value plotted over the x-axis.
1c. For n=3, plot the histogram of the mean for the 1000 trials. Use the Kolmogorov-Smirnov test to see if this sample distribution is normal (hint you will need to translate this to the standard normal distribution). Report the sample mean and sample standard deviation, the p-value from the test, and whether you would reject the null hypothesis.
Step5: We do not reject the null hypothesis.
1d. Repeat 1c but for n=20. What changes when the number of samples increases?
Step6: Nothing changes in this case. For both low and high n the sample mean is normally distributed.
(2) Weibull distribution. Now we will explore sampling from an alternate distribution type.
(2a) Sample the Weibull distribution with parameters a = 1, 1000 times. Plot the histogram of these values. Describe the shape of this histogram in words. Is it anything like the normal distribution?
Step7: Doesn't look anything like a normal distribution. Much heavier right tail.
(2b) As in problem 1, plot a boxplot of the sample distribution of the Weibull with A=1,B=1 from n=1
Step8: Doesn't differ from 1b.
(2c) For n=3, plot the histogram of the sample means. What is this distribution, is it Weibull or normal? Report your test results.
Step9: This distribution is closer to normal.
2d. Repeat 2c for n=20 (don’t include the plots, but do include the test result for normality and explain the impact of the number of samples n on normality).
Step10: Also looks normally distributed.
2e. Repeat 2c but with A=10 and B=2 (I.e plot the histogram of the calculated sample means for 1000 trials of n=3). What is this distribution, Weibull or normal? Why does it look different than in 1c?
Step11: (3) Differential expression . In this problem you will use the two-sample t-test to explore what differential hypothesis testing looks like in known standards and how multiple hypothesis correction effects the number of false positives and negatives from these tests.
Distribution 1, normal with mu=1, sigma=1
Distribution 2, normal with mu=3, sigma=1
Step12: 3a. False Negative
Step13: In reality these are two distributions, so the test has missed an occasion when we should have rejected the null hypothesis.
3b. False Positives
Step14: These are the same distribution, so the null hypothesis is true.
3c. Repeat 3b but 1000 times. What is the number of false positives? Predict the number of false positives you would get if you compared samples from the same distribution 10,000 times and explain why.
Step15: Number of false positives is p-value * trials, so proportional to the number of trials run.
3d. Now sweep n from 3 to 30 and report the number of false positives and false negatives for each n when you run 100 comparisons. (Provide this in a table format). Please explain the trend you see and interpret its meaning.
Step16: Number of false positives is not dependent upon $n$, while the number of false negatives is.
3e. For n=3, suggest how the number of false negatives changes according to sigma for the two distributions and test this. Report your new values and sigma and the number of false negatives in 100 tests.
Step17: The number of false negatives increases with sigma.
(3f) Lastly, perform 3d for p < 0.01 instead of p < 0.05. How does this influence the rate of false positives and negatives? How might you use this when performing many tests?
Step18: This decreases the number of false positives but increases the number of false negatives.
(5) Shaffer et al
In this excercise we're going to explore some basic concepts of statistics, and use them to build up to some more advanced ideas. To examine these ideas we're going to consider a classic of molecular biology—the Luria-Delbrück experiment.
Step19: (5a) First, we need to build up a distribution of outcomes for what an experiment would look like if it followed the Luria-Delbruck process.
Fill in the function below keeping track of normal and mutant cells. Then, make a second function, CVofNRuns, that runs the experiment 3000 times. You can assume a culture size of 120000 cells, and mutation rate of 0.0001 per cell per generation. What does the distribution of outcomes look like?
Step20: (5b) Compare the distribution of outcomes between the two replicates of the experiment using the 2-sample KS test. Are they consistent with one another?
Hint
Step21: (5c) Compare the distribution of outcomes between the experiment and model. Are our results consistent with resistance arising through a Luria-Delbruck related process? | Python Code:
# This line tells matplotlib to include plots here
% matplotlib inline
import numpy as np # We'll need numpy later
from scipy.stats import kstest, ttest_ind, ks_2samp, zscore
import matplotlib.pyplot as plt # This lets us access the pyplot functions
Explanation: Week 2 - Implementation of Shaffer et al
Due January 25 at 8 PM
End of explanation
# Initial code here
numRepeats = 1000
mu, sigma = 5.0, 1.0
n = 50
sampleMean = np.empty((n, numRepeats))
nVec = np.array(range(1, n+1))
for i in range(numRepeats):
for j in range(n):
sampleMean[j, i] = np.mean(np.random.normal(loc=mu, scale=sigma, size=(j + 1, )))
Explanation: (1) Estimation of a sample mean from a normally distributed variable.
Let us assume that the true distribution of a process is described by the normal distribution with $\mu=5$ and $\sigma=1$. You have a measurement technique that allows you to sample n points from this distribution. In Matlab this is a random number generator whose numbers will be chosen from the desired normal distribution by using the call normrnd(mu, sigma, [1, n]); the numpy equivalent is np.random.normal(mu, sigma, n). Sample from this normal distribution from n=1 to 50 (i.e. n=1:50). Create a plot for the standard deviation of the calculated mean from each n when you repeat the sampling 1000 times each. (I.e. You will repeat your n observations 1000 times and will calculate the sample mean for each of the 1000 trials).
End of explanation
# Answer to 1a here
plt.plot(nVec, np.std(sampleMean, axis=1), label='stddev sample mean')
plt.plot(nVec, 1./np.sqrt(nVec), 'r', label='1/sqrt(n)');
plt.title('Stdev of mean estimate v. n, 1000 trials');
plt.ylabel('Stdev');
plt.xlabel('n');
Explanation: (1a) Plot the standard deviation of the estimate of the sample mean versus n. Add a second line which is 1/sqrt(n). Describe what this tells you about the relationship between n and your power to estimate the underlying mean.
End of explanation
# Answer to 1b here
plt.boxplot(np.transpose(sampleMean));
plt.ylabel('Values');
plt.xlabel('n');
Explanation: This shows that the standard deviation of the sample mean (also called the standard error) follows a $1/\sqrt{n}$ relationship.
1b. Plot the boxplot for the sample means for all values n. Using words, interpret what the boxplot view of the 1000 trials for n=1 means and what the trends in the boxplot demonstrate compared to the plot in 1a (I.e. What information do you gain or lose in the two different plotting schemes)?
End of explanation
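The standard-error relationship read off the plot can also be checked numerically. A quick sketch (fixed seed; the values of n and the trial count are arbitrary):

```python
import numpy as np

rng = np.random.RandomState(42)
mu, sigma, n, trials = 5.0, 1.0, 25, 2000

# Std of the sample mean over many repeated experiments ...
means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
empirical_sem = means.std()

# ... should match sigma / sqrt(n)
theoretical_sem = sigma / np.sqrt(n)
print(empirical_sem, theoretical_sem)
```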
# Answer to 1c here
sampleVec = sampleMean[2, :]  # row index 2 holds the n=3 sample means
plt.hist(sampleVec)
plt.xlabel('Value')
plt.ylabel('Density')
# Normalize sample mean and stdev
P = kstest(zscore(sampleVec), 'norm')[1]
print(P)
Explanation: The box plot shows that the values with higher n converge on the true mean (5 in this case). The box plot shows the overall distribution of values in greater detail, while the standard plot is easier to read for a single value plotted over the x-axis.
1c. For n=3, plot the histogram of the mean for the 1000 trials. Use the Kolmogorov-Smirnov test to see if this sample distribution is normal (hint you will need to translate this to the standard normal distribution). Report the sample mean and sample standard deviation, the p-value from the test, and whether you would reject the null hypothesis.
End of explanation
# Answer to 1d here
sampleVec = sampleMean[19, :]  # row index 19 holds the n=20 sample means
plt.hist(sampleVec)
plt.xlabel('Value')
plt.ylabel('Density')
# Normalize sample mean and stdev
P = kstest(zscore(sampleVec), 'norm')[1]
print(P)
Explanation: We do not reject the null hypothesis.
1d. Repeat 1c but for n=20. What changes when the number of samples increases?
End of explanation
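A note on the standardization step used above: kstest(..., 'norm') compares against the standard normal, so un-standardized data gets rejected for its location and scale alone, not for its shape. A small sketch:

```python
import numpy as np
from scipy.stats import kstest, zscore

rng = np.random.RandomState(0)
x = rng.normal(loc=5.0, scale=2.0, size=500)

p_raw = kstest(x, 'norm')[1]          # rejected: wrong mean/sd
p_std = kstest(zscore(x), 'norm')[1]  # shape-only comparison
print(p_raw, p_std)
```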
# Answer 2a here
Wvec = np.random.weibull(1, size=(1000, ))
plt.hist(Wvec);
Explanation: Nothing changes in this case. For both low and high n the sample mean is normally distributed.
(2) Weibull distribution. Now we will explore sampling from an alternate distribution type.
(2a) Sample the Weibull distribution with parameters a = 1, 1000 times. Plot the histogram of these values. Describe the shape of this histogram in words. Is it anything like the normal distribution?
End of explanation
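One caveat worth noting: np.random.weibull() takes only the shape parameter; there is no scale argument. Assuming the MATLAB-style (A, B) = (scale, shape) convention used in the problem statement, the scale is applied by multiplication, which will matter for part 2e:

```python
import numpy as np
from math import gamma

rng = np.random.RandomState(1)
A, B = 10.0, 2.0  # (scale, shape), assuming the MATLAB wblrnd convention

# numpy's weibull() is the one-parameter form; multiply by the scale
samples = A * rng.weibull(B, size=20000)

# Mean of Weibull(scale=A, shape=B) is A * Gamma(1 + 1/B)
print(samples.mean(), A * gamma(1 + 1 / B))
```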
# Answer 2b here
sampleMean = np.empty((1000, n))
for i in range(sampleMean.shape[0]):
for j in range(n):
sampleMean[i, j] = np.mean(np.random.weibull(1.0, size=(j + 1, )))
plt.boxplot(sampleMean);
nVec = np.arange(1, n+1)
plt.plot(nVec, np.std(sampleMean, axis=0), label='stddev sample mean')
plt.plot(nVec, 1./np.sqrt(nVec), 'r', label='1/sqrt(n)');
plt.title('Stdev of mean estimate v. n, 1000 trials');
plt.ylabel('Stdev');
plt.xlabel('n');
Explanation: Doesn't look anything like a normal distribution. Much heavier right tail.
(2b) As in problem 1, plot a boxplot of the sample distribution of the Weibull with A=1,B=1 from n=1:50. How does this differ from the plot in 1b and why? Plot the standard deviations of the sample means versus n. Is this any different?
End of explanation
sampleVec = sampleMean[:, 2]  # trials are rows here, so column 2 holds the n=3 sample means
plt.hist(sampleVec)
plt.xlabel('Value')
plt.ylabel('Density')
# Normalize sample mean and stdev
Pnorm = kstest(zscore(sampleVec), 'norm')[1]
Pweib = kstest(sampleVec, 'expon')[1]
# Scipy and numpy's weibull distributions are different, but a weibull(a=1)
# is the same as an exponential distribution.
print(Pnorm)
print(Pweib)
Explanation: Doesn't differ from 1b.
(2c) For n=3, plot the histogram of the sample means. What is this distribution, is it Weibull or normal? Report your test results.
End of explanation
sampleVec = sampleMean[:, 19]  # column 19 holds the n=20 sample means
plt.hist(sampleVec)
plt.xlabel('Value')
plt.ylabel('Density')
# Normalize sample mean and stdev
Pnorm = kstest(zscore(sampleVec), 'norm')[1]
Pweib = kstest(zscore(sampleVec), 'expon')[1]
print(Pnorm)
print(Pweib)
Explanation: This distribution is closer to normal.
2d. Repeat 2c for n=20 (don’t include the plots, but do include the test result for normality and explain the impact of the number of samples n on normality).
End of explanation
# Answer to 2e
# The distribution changes shape, but the same outcome holds: the
# sampling distribution of the mean morphs to look normally
# distributed as n increases.
Explanation: Also looks normally distributed.
2e. Repeat 2c but with A=10 and B=2 (I.e plot the histogram of the calculated sample means for 1000 trials of n=3). What is this distribution, Weibull or normal? Why does it look different than in 1c?
End of explanation
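The answer cell above contains only commentary, so here is one way the 2e computation itself might look. This is a sketch that applies the scale by multiplication, under the same assumed (A, B) = (scale, shape) reading of the parameters:

```python
import numpy as np
from scipy.stats import kstest, zscore

rng = np.random.RandomState(0)
A, B, n, trials = 10.0, 2.0, 3, 1000

# 1000 trials of the sample mean of n=3 draws from Weibull(A=10, B=2)
means = (A * rng.weibull(B, size=(trials, n))).mean(axis=1)

# Weibull with shape B=2 is much less skewed than B=1, so even the
# n=3 sample means already look fairly close to normal
p_norm = kstest(zscore(means), 'norm')[1]
print(p_norm)
```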
dOne = lambda n: np.random.normal(loc=1.0, scale=1.0, size=(n, ))
dTwo = lambda n: np.random.normal(loc=3.0, scale=1.0, size=(n, ))
Explanation: (3) Differential expression . In this problem you will use the two-sample t-test to explore what differential hypothesis testing looks like in known standards and how multiple hypothesis correction effects the number of false positives and negatives from these tests.
Distribution 1, normal with mu=1, sigma=1
Distribution 2, normal with mu=3, sigma=1
End of explanation
def falseNeg(n=3, nTrial=100, p=0.05):
compare = np.empty((nTrial, ))
for ii, _ in enumerate(compare):
compare[ii] = ttest_ind(dOne(n), dTwo(n), equal_var=False)[1]
return sum(compare > p)
print(falseNeg())
Explanation: 3a. False Negative: Using n=3, perform 100 comparisons of distribution 1 versus distribution 2 with an alpha=0.05. Anytime you fail to reject the hypothesis it is a false negative. Why is this a false negative? Report the number of false negatives from your 100 tests.
Hint: It'd be helpful to define a function that does this for you at this point.
End of explanation
def falsePos(n=3, nTrial=100, p=0.05):
compare = np.empty((nTrial, ))
for ii in range(len(compare)):
compare[ii] = ttest_ind(dOne(n), dOne(n), equal_var=False)[1]
return sum(compare < p)
print(falsePos())
Explanation: In reality these are two distributions, so the test has missed an occasion when we should have rejected the null hypothesis.
3b. False Positives: Using n=3, perform 100 comparisons of distribution 1 versus distribution 1 with an alpha=0.05. Anytime you reject the hypothesis this is a false positive. Why is this a false positive? Report the number of false positives from your 100 tests.
End of explanation
print(falsePos(nTrial=1000))
print(falsePos(nTrial=10000))
Explanation: These are the same distribution, so the null hypothesis is true.
3c. Repeat 3b but 1000 times. What is the number of false positives? Predict the number of false positives you would get if you compared samples from the same distribution 10,000 times and explain why.
End of explanation
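The prediction asked for in 3c follows from the definition of the significance level: under the null hypothesis p-values are uniform on (0, 1), so the false-positive count over N tests is Binomial(N, alpha) with mean alpha * N. A quick simulation sketch:

```python
import numpy as np

alpha = 0.05
rng = np.random.RandomState(0)

for n_tests in (100, 1000, 10000):
    p_values = rng.uniform(size=n_tests)  # null p-values are Uniform(0, 1)
    fp = np.sum(p_values < alpha)
    print(n_tests, 'tests:', fp, 'false positives, expected ~', alpha * n_tests)
```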
nVec = np.array(range(3, 31))
fPos = np.empty(nVec.shape)
fNeg = np.empty(nVec.shape)
for nn, nItem in enumerate(nVec):
fPos[nn] = falsePos(n=nItem)
fNeg[nn] = falseNeg(n=nItem)
plt.plot(nVec, fPos);
plt.plot(nVec, fNeg);
Explanation: Number of false positives is p-value * trials, so proportional to the number of trials run.
3d. Now sweep n from 3 to 30 and report the number of false positives and false negatives for each n when you run 100 comparisons. (Provide this in a table format). Please explain the trend you see and interpret its meaning.
End of explanation
dThree = lambda n: np.random.normal(loc=3.0, scale=2.0, size=(n, ))
def falseNegB(n=3, nTrial=100):
compare = np.empty((nTrial, ))
for ii, _ in enumerate(compare):
compare[ii] = ttest_ind(dOne(n), dThree(n), equal_var=False)[1]
return sum(compare > 0.05)
print(falseNegB())
Explanation: Number of false positives is not dependent upon $n$, while the number of false negatives is.
3e. For n=3, suggest how the number of false negatives changes according to sigma for the two distributions and test this. Report your new values and sigma and the number of false negatives in 100 tests.
End of explanation
nVec = np.array(range(3, 31))
fPos = np.empty(nVec.shape)
fNeg = np.empty(nVec.shape)
for nn, nItem in enumerate(nVec):
fPos[nn] = falsePos(n=nItem, p=0.01)
fNeg[nn] = falseNeg(n=nItem, p=0.01)
plt.plot(nVec, fPos);
plt.plot(nVec, fNeg);
Explanation: The number of false negatives increases with sigma.
(3f) Lastly, perform 3d for p < 0.01 instead of p < 0.05. How does this influence the rate of false positives and negatives? How might you use this when performing many tests?
End of explanation
repOne = np.loadtxt("data/wk2/expt_rep1.csv")
repTwo = np.loadtxt("data/wk2/expt_rep2.csv")
Explanation: This decreases the number of false positives but increases the number of false negatives.
(5) Shaffer et al
In this excercise we're going to explore some basic concepts of statistics, and use them to build up to some more advanced ideas. To examine these ideas we're going to consider a classic of molecular biology—the Luria-Delbrück experiment.
End of explanation
# Simulate the Luria-Delbruck process once: grow a culture by repeated doubling and track resistant mutants arising at each generation
def simLuriaDelbruck(cultureSize, mutationRate):
nCells, nMuts = 1, 0 # Start with 1 non-resistant cell
    for _ in range(int(np.floor(np.log2(cultureSize)))):  # num of gens
nCells = 2 * nCells # Double the number of cells, simulating division
newMuts = np.random.poisson(nCells * mutationRate) # de novo
nMuts = 2 * nMuts + newMuts # Previous mutants divide and add
nCells = nCells - newMuts # Non-resistant pop goes down by newMuts
return nMuts
def CVofNRuns(N, cultureSize, mutationRate):
return np.fromiter((simLuriaDelbruck(cultureSize, mutationRate) for x in range(N)), dtype = np.int)
cvs = CVofNRuns(3000, 120000, 0.0001)
plt.hist(cvs, bins=30);
Explanation: (5a) First, we need to build up a distribution of outcomes for what an experiment would look like if it followed the Luria-Delbruck process.
Fill in the function below keeping track of normal and mutant cells. Then, make a second function, CVofNRuns, that runs the experiment 3000 times. You can assume a culture size of 120000 cells, and mutation rate of 0.0001 per cell per generation. What does the distribution of outcomes look like?
End of explanation
ks_2samp(repOne/np.mean(repOne), repTwo/np.mean(repTwo))
Explanation: (5b) Compare the distribution of outcomes between the two replicates of the experiment using the 2-sample KS test. Are they consistent with one another?
Hint: Each experiment varies slightly in the amount of time it was run. The absolute values of the numbers doesn't matter, so much as the variation of them. You'll need to correct for this by dividing by the mean of the results.
End of explanation
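The hint about dividing by the mean can be sanity-checked on synthetic data: mean-normalization removes a pure scale difference (such as different culture times) while preserving the shape that the KS test compares. A sketch, using a continuous stand-in for the colony counts:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.RandomState(0)

# Synthetic stand-in for colony counts (continuous, for illustration)
counts = rng.gamma(shape=2.0, scale=10.0, size=300)
scaled = 3.0 * counts  # a "replicate" that simply ran 3x longer

stat_raw = ks_2samp(counts, scaled)[0]
stat_norm = ks_2samp(counts / counts.mean(), scaled / scaled.mean())[0]
print(stat_raw, stat_norm)
```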
ks_2samp(repOne/np.mean(repOne), cvs/np.mean(cvs))
Explanation: (5c) Compare the distribution of outcomes between the experiment and model. Are our results consistent with resistance arising through a Luria-Delbruck related process?
End of explanation |
8,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow training of an artificial neural network to recognize handwritten digits in the MNIST dataset and export it to Oracle RDBMS
This notebook contains the preparation steps for the notebook MNIST_oracle_plsql.ipynb where you can find the steps for deploying a neural network serving engine in Oracle using PL/SQL
Author
Step1: Definition of the neural network
Step2: Train the network
The training uses 55000 images with labels
It is performed over 30000 iterations using mini batch size of 100 images
Step3: Learning exercise
Step4: Extracting the test images and labels as numpy arrays
Step7: Example of how to run the neural network "manually" using the tensor values extracted into numpy arrays
Step8: Visual test that the predicted value is indeed correct
Step9: Transfer of the tensors and test data into Oracle tables
For the following you should have access to a (test) Oracle database. This procedure has been tested with Oracle 11.2.0.4 and 12.1.0.2 on Linux.
To keep the test isolated you can create a dedicated user (suggested name, mnist) for the data transfer, as follows
Step10: Transfer the matrixes W0 and W1 into the table tensors (which must be precreated as described above)
Step11: Transfer the vectors b0 and b1 into the table "tensors" (the table is expected to exist on the DB, create it using the SQL described above)
Step12: Transfer the test data with images and labels into the table "testdata" (the table is expected to exist on the DB, create it using the SQL described above) | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Import data
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('data_dir', '/tmp/data/', 'Directory for storing data')
# Load training and test data sets with labels
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
Explanation: TensorFlow training of an artificial neural network to recognize handwritten digits in the MNIST dataset and export it to Oracle RDBMS
This notebook contains the preparation steps for the notebook MNIST_oracle_plsql.ipynb where you can find the steps for deploying a neural network serving engine in Oracle using PL/SQL
Author: Luca.Canali@cern.ch - July 2016
Initialize the environment and load the training set
Credits: the code for defining and training the neural network is adapted (with extensions) from the Google TensorFlow tutorial https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_softmax.py
End of explanation
# define and initialize the tensors
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
W0 = tf.Variable(tf.truncated_normal([784, 100], stddev=0.1))
b0 = tf.Variable(tf.zeros([100]))
W1 = tf.Variable(tf.truncated_normal([100, 10], stddev=0.1))
b1 = tf.Variable(tf.zeros([10]))
# Feed forward neural network with one hidden layer
# y0 is the hidden layer with sigmoid activation
y0 = tf.sigmoid(tf.matmul(x, W0) + b0)
# y1 is the output layer (softmax)
# y1[n] is the predicted probability that the input image depicts number 'n'
y1 = tf.nn.softmax(tf.matmul(y0, W1) + b1)
# The the loss function is defined as cross_entropy
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y1), reduction_indices=[1]))
# train the network using gradient descent
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cross_entropy)
# start a TensorFlow interactive session
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
Explanation: Definition of the neural network:
The following defines a basic feed forward neural network with one hidden layer
Other standard techniques used are the definition of cross entropy as loss function and the use of gradient descent as optimizer
End of explanation
batch_size = 100
train_iterations = 30000
# There are mnist.train.num_examples=55000 images in the train sample
# train in batches of 'batch_size' images at a time
# Repeat for 'train_iterations' number of iterations
# Training batches are randomly calculated as each new epoch starts
for i in range(train_iterations):
    batch = mnist.train.next_batch(batch_size)
    train_data = {x: batch[0], y_: batch[1]}
    train_step.run(feed_dict=train_data)
# Test the accuracy of the trained network
correct_prediction = tf.equal(tf.argmax(y1, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Accuracy of the trained network over the test images: %s" %
accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))
Explanation: Train the network
The training uses 55000 images with labels
It is performed over 30000 iterations using mini batch size of 100 images
End of explanation
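The epoch and mini-batch bookkeeping that mnist.train.next_batch does internally can be sketched in plain numpy: shuffle the indices once per epoch, then slice. A minimal sketch (the toy arrays are illustrative):

```python
import numpy as np

def minibatches(X, y, batch_size, rng):
    # Yield shuffled (X, y) mini-batches covering one epoch
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

rng = np.random.RandomState(0)
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

batches = list(minibatches(X, y, batch_size=4, rng=rng))
print([len(by) for _, by in batches])  # 4, 4, 2 -> one full epoch
```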
# There are 2 matrices and 2 vectors used in this neural network:
W0_matrix=W0.eval()
b0_array=b0.eval()
W1_matrix=W1.eval()
b1_array=b1.eval()
print ("W0 is matrix of size: %s " % (W0_matrix.shape,) )
print ("b0 is array of size: %s " % (b0_array.shape,) )
print ("W1 is matrix of size: %s " % (W1_matrix.shape,) )
print ("b1 is array of size: %s " % (b1_array.shape,) )
Explanation: Learning exercise: extract the tensors as 'manually' run the neural network scoring
In the following you can find an example of how to manually run the neural network scoring in Python using numpy. This is intended as an example to further the understanding of how the scoring engine works and opens the way for the next steps, that is the implementation of the scoring engine for Oracle using PL/SQL (see also the notebook MNIST_oracle_plsql.ipynb)
End of explanation
testlabels=tf.argmax(mnist.test.labels,1).eval()
testimages=mnist.test.images
print ("testimages is matrix of size: %s " % (testimages.shape,) )
print ("testlabels is array of size: %s " % (testlabels.shape,) )
Explanation: Extracting the test images and labels as numpy arrays
End of explanation
import numpy as np
def softmax(x):
    """Compute the softmax function on a numpy array"""
    return np.exp(x) / np.sum(np.exp(x), axis=0)

def sigmoid(x):
    """Compute the sigmoid function on a numpy array"""
    return (1 / (1 + np.exp(-x)))
testimage=testimages[0]
testlabel=testlabels[0]
hidden_layer = sigmoid(np.dot(testimage, W0_matrix) + b0_array)
predicted = np.argmax(softmax(np.dot(hidden_layer, W1_matrix) + b1_array))
print ("image label %d, predicted value by the neural network: %d" % (testlabel, predicted))
Explanation: Example of how to run the neural network "manually" using the tensor values extracted into numpy arrays
End of explanation
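The single-image scoring above extends naturally to whole batches; the sketch below uses random stand-in weights (since the real W0/W1 only exist inside a trained session) purely to show the vectorized forward pass.

```python
import numpy as np

def softmax(z):
    # subtract the row-wise max for numerical stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
n_hidden = 100                                   # assumed hidden-layer size
W0_matrix = rng.randn(784, n_hidden) * 0.1       # stand-ins for the trained tensors
b0_array = np.zeros(n_hidden)
W1_matrix = rng.randn(n_hidden, 10) * 0.1
b1_array = np.zeros(10)

images = rng.rand(32, 784)                       # a batch of 32 fake images
hidden = sigmoid(images @ W0_matrix + b0_array)  # same forward pass, vectorized
probs = softmax(hidden @ W1_matrix + b1_array)
predicted = probs.argmax(axis=1)
print(predicted.shape)   # (32,)
```

Scoring a batch this way avoids the Python loop over images and is how one would compute accuracy over the full test set.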
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(testimage.reshape(28,28), cmap='Greys')
Explanation: Visual test that the predicted value is indeed correct
End of explanation
import cx_Oracle
ora_conn = cx_Oracle.connect('mnist/mnist@dbserver:1521/orcl.cern.ch')
cursor = ora_conn.cursor()
Explanation: Transfer of the tensors and test data into Oracle tables
For the following you should have access to a (test) Oracle database. This procedure has been tested with Oracle 11.2.0.4 and 12.1.0.2 on Linux.
To keep the test isolated you can create a dedicated user (suggested name, mnist) for the data transfer, as follows:
<code>
From a DBA account (for example the user system) execute:
SQL> create user mnist identified by mnist default tablespace users quota unlimited on users;
SQL> grant connect, create table, create procedure to mnist;
SQL> grant read, write on directory DATA_PUMP_DIR to mnist;
</code>
These are the tables that will be used in the following code to transfer the tensors and testdata:
<code>
SQL> connect mnist/mnist@ORCL
SQL> create table tensors(name varchar2(20), val_id number, val binary_float, primary key(name, val_id));
SQL> create table testdata(image_id number, label number, val_id number, val binary_float, primary key(image_id, val_id));
</code>
Open the connection to the database using cx_Oracle:
(for tips on how to install and use of cx_Oracle see also https://github.com/LucaCanali/Miscellaneous/tree/master/Oracle_Jupyter)
End of explanation
i = 0
sql="insert into tensors values ('W0', :val_id, :val)"
for column in W0_matrix:
array_values = []
for element in column:
array_values.append((i, float(element)))
i += 1
cursor.executemany(sql, array_values)
ora_conn.commit()
i = 0
sql="insert into tensors values ('W1', :val_id, :val)"
for column in W1_matrix:
array_values = []
for element in column:
array_values.append((i, float(element)))
i += 1
cursor.executemany(sql, array_values)
ora_conn.commit()
Explanation: Transfer the matrices W0 and W1 into the table "tensors" (which must be pre-created as described above)
End of explanation
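The row-building pattern above — flattening a matrix into (val_id, value) bind tuples for executemany — can be exercised without a database; the helper below is invented for illustration only.

```python
import numpy as np

def matrix_to_rows(matrix):
    """Flatten a 2-D array row by row into (val_id, value) bind tuples."""
    rows = []
    i = 0
    # note: the loop variable in the code above is named 'column',
    # but iterating a 2-D numpy array yields its rows
    for row in matrix:
        for element in row:
            rows.append((i, float(element)))
            i += 1
    return rows

W0_demo = np.arange(6.0).reshape(2, 3)
rows = matrix_to_rows(W0_demo)
print(len(rows), rows[0], rows[-1])   # 6 (0, 0.0) (5, 5.0)
```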
i = 0
sql="insert into tensors values ('b0', :val_id, :val)"
array_values = []
for element in b0_array:
array_values.append((i, float(element)))
i += 1
cursor.executemany(sql, array_values)
i = 0
sql="insert into tensors values ('b1', :val_id, :val)"
array_values = []
for element in b1_array:
array_values.append((i, float(element)))
i += 1
cursor.executemany(sql, array_values)
ora_conn.commit()
Explanation: Transfer the vectors b0 and b1 into the table "tensors" (the table is expected to exist on the DB, create it using the SQL described above)
End of explanation
image_id = 0
array_values = []
sql="insert into testdata values (:image_id, :label, :val_id, :val)"
for image in testimages:
val_id = 0
array_values = []
for element in image:
array_values.append((image_id, testlabels[image_id], val_id, float(element)))
val_id += 1
cursor.executemany(sql, array_values)
image_id += 1
ora_conn.commit()
Explanation: Transfer the test data with images and labels into the table "testdata" (the table is expected to exist on the DB, create it using the SQL described above)
End of explanation |
8,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Lab
Step4: 2. Explore the Baseball data
Step5: 3. Blend it all together | Python Code:
"""
Since the data is unavailable from DataCamp, let's create
some of our own
"""
# Import numpy
import numpy as np
from numpy import random
from numpy import column_stack
# np_baseball is un-available, so let's generate some random distribution!
height = np.round( np.random.normal( 5.50, 5.0, 1015 ), 2 )
weight = np.round( np.random.normal( 70.50, 5.0, 1015 ), 2 )
age = np.round( np.random.normal( 31, 2, 1015 ), 2 )
# let's assign these values to np_baseball
np_baseball = np.column_stack( ( height, weight, age) )
# Create np_height from np_baseball
"""
Since the height column is at index "0", and all the rows
are to be included, hence ":"
"""
np_height = np.array( np_baseball[ :, 0 ] )
# Print out the mean of np_height
print( np.mean( np_height ) )
# Print out the median of np_height
print( np.median( np_height ) )
"""
Inference:
+ An average height of 5.36 inch, which sounds right.
+ Further, by descriptive statistics,
  the "median" is in general much less affected by outliers.
  However, there seems to be no outlier here.
+ Hence 5.54 inch makes sense.
As a side note: always check both the median and the mean.
Why? It gives an insight into the overall distribution of the
entire dataset.
"""
Explanation: Lab: Basic Statistics with Numpy
Objectives:
Experimenting with some functions Numpy offers out of the box.
Performing summary statistics to have a first look at our data.
Average Vs Median -- 100xp, status : earned
Explore the Baseball data -- 100xp, status: earned
Blend it all together -- 100xp, status : earned
1. Average Vs Median
Preface: The baseball data is available as a 2D Numpy array with 3 columns (height, weight, age) and 1015 rows. The name of this Numpy array is np_baseball.
After restructuring the data, however, you notice that some height values are abnormally high.
Instructions:
Create Numpy array np_height, that is equal to first column of np_baseball.
Print out the mean of np_height.
Print out the median of np_height.
End of explanation
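A quick toy example (made-up numbers) of why the median resists outliers while the mean does not:

```python
import numpy as np

values = np.array([5.2, 5.4, 5.5, 5.6, 5.8])
with_outlier = np.append(values, 180.0)   # one wildly wrong measurement

print(np.mean(values), np.median(values))               # 5.5 5.5
print(np.mean(with_outlier), np.median(with_outlier))   # the mean jumps, the median barely moves
```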
# np_baseball is available
# Import numpy
import numpy as np
# Print mean height (first column)
avg = np.mean(np_baseball[:,0])
print("Average: " + str(avg))
# Print median height. Replace 'None'
med = np.median( np_baseball[:, 0])
print("\nMedian: " + str(med))
# Print out the standard deviation on height. Replace 'None'
stddev = np.std( np_baseball[:, 0])
print("\nStandard Deviation: " + str(stddev))
# Print out correlation between first and second column. Replace 'None'
corr = np.corrcoef( np_baseball[:,0], np_baseball[:,1])
print("\nCorrelation: " + str(corr))
Explanation: 2. Explore the Baseball data:
Preface:
Because the mean and median are so far apart, you decide to complain to the MLB. They find the error and send the corrected data over to you. It's again available as a 2D Numpy array np_baseball, with three columns.
Instructions:
The code to print out the mean height is already included. Complete the code for the median height. Replace None with the correct code.
Use np.std() on the first column of np_baseball to calculate stddev.
Replace None with the correct code.
Do big players tend to be heavier?
Use np.corrcoef() to store the correlation between the first and second column of np_baseball in corr.
Replace the None with the correct code.
End of explanation
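Note that np.corrcoef() returns a 2x2 correlation matrix rather than a single number; a minimal sketch with perfectly correlated data makes the layout clear:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                  # perfectly linearly related

corr = np.corrcoef(x, y)
print(corr.shape)                  # (2, 2)
# the diagonal is self-correlation (always 1); the off-diagonal entries are r
print(corr[0, 1])                  # 1.0
```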
# heights and positions are un-available from data camp
# So let's create some of our own.
import numpy as np
from numpy import random
from numpy import column_stack
# let's represent positions with numbers:
# 1:'GK', 2:'D', 3:'M', 4:'A'
# (random integer codes, so that the equality comparisons below can match exactly)
positions = np.random.randint(1, 5, 1000)
heights = np.round( np.random.normal( 5.8, 4.0, 1000), 2) # in feet
# let's concatenate these into two columns
np_football = np.column_stack( (positions, heights) )
# Convert positions and heights to numpy arrays: np_positions, np_heights
np_positions = np.array( np_football[:, 0])
np_heights = np.array( np_football[:, 1])
# Heights of the goalkeepers: gk_heights
gk_heights = np_heights[ np_positions == 1 ]
# Heights of the other players: other_heights
other_heights = np_heights[ np_positions != 1 ]
# Print out the mean position of the np_football
print("\nMean positions at which players play: " + str( np.mean( np_positions ) ) )
# Print out the median position of the np_football
print("\nMedian positions at which players play: " + str( np.median( np_positions ) ) )
# Print out the median height of goalkeepers. Replace 'None'
print("\nMedian height of goalkeepers: " + str( np.median( gk_heights ) ) )
# Print out the median height of other players. Replace 'None'
print("\nMedian height of other players: " + str( np.median( other_heights ) ) )
Explanation: 3. Blend it all together:
Preface:
You've contacted the FIFA for some data and they handed you two lists. The lists are the following:
position = [ 'GK', 'M', 'A', 'D', ... ]
height = [ 191, 184, 185, 180, ... ]
Each element in the lists corresponds to a player.
The first list, positions, contains strings representing each player's position.
The possible positions are: 'GK' (goalkeeper), 'M' (midfield), 'A' (attack) and 'D' (defense).
The second list, heights, contains integers representing the height of the player in cm.
The first player in the lists is a goalkeeper and is pretty tall (191 cm).
Presumption:
You're fairly confident that the median height of goalkeepers is higher than that of other players on the soccer field.
Some of your friends don't believe you, so you are determined to show them using the data you received from FIFA and your newly acquired Python skills.
Instructions:
Convert heights and positions, which are regular lists, to numpy arrays. Call them np_heights and np_positions.
Extract all the heights of the goalkeepers. You can use a little trick here:
np_positions == 'GK' as an index for np_heights. Assign the result to gk_heights.
Extract all the heights of all the other players.
This time use np_positions != 'GK' as an index for np_heights.
Assign the result to other_heights.
Print out the median height of the goalkeepers using np.median().
Replace None with the correct code.
Do the same for the other players. Print out their median height. Replace None with the correct code.
End of explanation |
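The 'little trick' in the instructions is numpy boolean-mask indexing; a minimal sketch with a few made-up players:

```python
import numpy as np

np_positions = np.array(['GK', 'M', 'A', 'GK', 'D'])
np_heights = np.array([191, 184, 185, 193, 180])

# comparing an array to a scalar yields a boolean mask usable as an index
gk_heights = np_heights[np_positions == 'GK']
other_heights = np_heights[np_positions != 'GK']
print(np.median(gk_heights), np.median(other_heights))   # 192.0 184.0
```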
8,437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using ZEMAX and PyZDDE with IPython/Jupyter notebook
<img src="https
Step1: Create PyZDDE object
Step2: Load an existing lens design file (Cooke 40 degree field) into Zemax's DDE server
Step3: The following figure is the 2-D Layout plot of the lens.
Step4: A few DDE dataitems that return information about the system have duplicate functions with the prefix ipz. They generally produce outputs that are suitable for interactive environments and more suitable for human understanding. One example of such a function pair is zGetFirst() and ipzGetFirst()
Step5: Analysis plots - ray-fan analysis plot as an example
To plot the ray-fan analysis graph using ipzCaptureWindow() function, we need to provide the 3-letter button code of the corresponding analysis function. If we don't remember the exact button code, we can use a helper function in pyzdde to get some help
Step6: Note that there are a few other options for retrieving and plotting analysis plots from Zemax. They are discussed in a separate notebook. However, here is one worth quickly mentioning. You can ask ipzCaptureWindow() to just return the image pixel array instead of directly plotting. Then you can use matplotlib (or any other plotting library) for plotting. Here is an example
Step7: Now, let's say that we want to find the angular magnification of the above optics and we want to find if Zemax provides any operand whose value we can directly read. For that we can use another module-level helper function to find all operands related to angular magnification
Step8: Bingo! AMAG is the operand we want. Now we can use
Step9: Of course, there is a function in PyZDDE called zGetPupilMagnification() that we can use to get the angular magnification, since the inverse of the pupil magnification is the angular magnification (as a consequence of the Lagrange Optical Invariant).
Step10: Connecting to another Zemax session simultaneously
Now, a second pyzdde object is created to communicate with a second ZEMAX server. Note that the first object is still present.
Step11: Set up lens surfaces in the second ZEMAX DDE server. Towards the end, zPushLens() is called so that the LDE is updated with the just-made lens.
Step12: Layout plot of the second lens
Step13: Spot diagram of the second lens
Step14: Just to demonstrate that the first lens (in the first ZEMAX server) is still available, the Layout plot is rendered again.
Step15: Spot diagram of the first lens
Step16: MTF plot of the first lens
Step17: Executing ZPL macro
Lastly, here is an example of how to execute a ZPL macro using PyZDDE.
Since ZEMAX can execute ZPL macros present in a set folder (generally the default macro folder in the data folder), the appropriate macro folder path needs to be set if it is not the default macro folder path.
Step18: The following command executes the ZPL macro 'GLOBAL' provided by ZEMAX. The macro computes the global vertex coordinates or orientations surface by surface, and outputs a text window within the ZEMAX environment. Maximize (if required) the ZEMAX application window to see the output after executing the following command.
Step19: Close the DDE links | Python Code:
# imports
from __future__ import division
import os
import matplotlib.pyplot as plt
import pyzdde.zdde as pyz
%matplotlib inline
Explanation: Using ZEMAX and PyZDDE with IPython/Jupyter notebook
<img src="https://raw.githubusercontent.com/indranilsinharoy/PyZDDE/master/Doc/Images/articleBanner_00_usingZemax.png" height="230">
Please feel free to e-mail any corrections, comments and suggestions to the author (Indranil Sinharoy)
Last updated: 12/27/2015
License: Creative Commons Attribution 4.0 International
Summary:
This notebook demonstrates how to use Zemax with PyZDDE within a Jupyter notebook. It shows how to create a PyZDDE object for communicating with Zemax. It shows some of the PyZDDE functions that are especially designed for using in the notebook environment (functions with prefix "ipz" instead of just "z"). It also shows how to create two simultaneous communication links to two instances of Zemax.
Using the IPython notebook with ZEMAX is attractive for the following reasons:
The notebook provides an interactive, exploratory computational environment similar to Mathematica's CDF where we can combine text, figures, computation, etc. within a single document. Personally, it really helps me to tinker and explore various stages of designs and optical simulations.
It can provide a way for quick documentation of a lens design process as one progresses through the design, including embedding intermittent figures, prescription files, thought process, etc.
It can also be used in an educational setting, such as to create and distribute lectures, notes, etc.
Before using the notebook for communicating with ZEMAX, ensure that ZEMAX is running. Once verified, a pyzdde object can be created and initialized as shown below.
End of explanation
l1 = pyz.createLink() # create a DDE link object for communication
Explanation: Create PyZDDE object
End of explanation
zfile = os.path.join(l1.zGetPath()[1], 'Sequential', 'Objectives', 'Cooke 40 degree field.zmx')
l1.zLoadFile(zfile)
Explanation: Load an existing lens design file (Cooke 40 degree field) into Zemax's DDE server
End of explanation
# Surfaces in the sequential lens data editor
l1.ipzGetLDE()
# note that ipzCaptureWindow() doesn't work in the new OpticStudio because
# the dataitem 'GetMetafile' has become obsolete
l1.ipzCaptureWindow('Lay')
# General System properties
l1.zGetSystem()
Explanation: The following figure is the 2-D Layout plot of the lens.
End of explanation
# Paraxial/ first order properties of the system
l1.zGetFirst()
# duplicate of zGetFirst() for use in the notebook
l1.ipzGetFirst()
# ... another example is the zGetSystemAper() that returns information about the aperture.
# The aperture type is returned as a code which we might not remember always ...
l1.zGetSystemAper()
# ...with the duplicate, ipzGetSystemAper(), we can immediately know that
# the aperture type is the Entrance Pupil Diameter (EPD)
l1.ipzGetSystemAper()
# information about the field definition
l1.ipzGetFieldData()
Explanation: A few DDE dataitems that return information about the system have duplicate functions with the prefix ipz. They generally produce outputs that are suitable for interactive environments and more suitable for human understanding. One example of such a function pair is zGetFirst() and ipzGetFirst():
End of explanation
pyz.findZButtonCode('ray')
l1.ipzCaptureWindow('Ray', gamma=0.4)
Explanation: Analysis plots - ray-fan analysis plot as an example
To plot the ray-fan analysis graph using ipzCaptureWindow() function, we need to provide the 3-letter button code of the corresponding analysis function. If we don't remember the exact button code, we can use a helper function in pyzdde to get some help:
End of explanation
lay_arr = l1.ipzCaptureWindow('Lay', percent=15, gamma=0.08, retArr=True)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
pyz.imshow(lay_arr, cropBorderPixels=(5, 5, 1, 90), fig=fig, faxes=ax)
ax.set_title('Layout plot', fontsize=16)
# Annotate Lens numbers
ax.text(41, 70, "L1", fontsize=12)
ax.text(98, 105, "L2", fontsize=12)
ax.text(149, 89, "L3", fontsize=12)
# Annotate the lens with radius of curvature information
col = (0.08,0.08,0.08)
s1_r = 1.0/l1.zGetSurfaceData(1,2)
ax.annotate("{:0.2f}".format(s1_r), (37, 232), (8, 265), fontsize=12,
arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5)))
s2_r = 1.0/l1.zGetSurfaceData(2,2)
ax.annotate("{:0.2f}".format(s2_r), (47, 232), (50, 265), fontsize=12,
arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5)))
s6_r = 1.0/l1.zGetSurfaceData(6,2)
ax.annotate("{:0.2f}".format(s6_r), (156, 218), (160, 251), fontsize=12,
arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5)))
ax.text(5, 310, "Cooke Triplet, EFL = {} mm, F# = {}, Total track length = {} mm"
.format(50, 5, 60.177), fontsize=14)
plt.show()
Explanation: Note that there are a few other options for retrieving and plotting analysis plots from Zemax. They are discussed in a separate notebook. However, here is one worth quickly mentioning. You can ask ipzCaptureWindow() to just return the image pixel array instead of directly plotting. Then you can use matplotlib (or any other plotting library) for plotting. Here is an example:
End of explanation
pyz.findZOperand('angular magnification')
Explanation: Now, let's say that we want to find the angular magnification of the above optics and we want to find if Zemax provides any operand whose value we can directly read. For that we can use another module-level helper function to find all operands related to angular magnification:
End of explanation
l1.zOperandValue('AMAG', 1) # the argument "1" is for the wavelength
Explanation: Bingo! AMAG is the operand we want. Now we can use
End of explanation
1.0/l1.zGetPupilMagnification()
Explanation: Of course, there is a function in PyZDDE called zGetPupilMagnification() that we can use to get the angular magnification, since the inverse of the pupil magnification is the angular magnification (as a consequence of the Lagrange Optical Invariant).
End of explanation
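The relation used above — angular magnification equals the inverse of the pupil magnification — can be checked with plain numbers; the pupil diameters below are hypothetical:

```python
# Pupil magnification: exit-pupil diameter over entrance-pupil diameter
d_entrance = 10.0   # mm, hypothetical
d_exit = 8.0        # mm, hypothetical

pupil_mag = d_exit / d_entrance
angular_mag = 1.0 / pupil_mag     # consequence of the Lagrange invariant
print(pupil_mag, angular_mag)     # 0.8 1.25
```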
l2 = pyz.createLink() # create a second DDE communication link object
Explanation: Connecting to another Zemax session simultaneously
Now, a second pyzdde object is created to communicate with a second ZEMAX server. Note that the first object is still present.
End of explanation
# Erase all lens data in the LDE (good practice)
l2.zNewLens()
# Wavelength data
wavelengths = (0.48613270, 0.58756180, 0.65627250) #mm
weights = (1.0, 1.0, 1.0)
l2.zSetWaveTuple((wavelengths, weights))
l2.zSetPrimaryWave(2) # Set 0.58756180 as primary
# System aperture data, and global reference surface.
aType, stopSurf, appValue = 0, 1, 100 # EPD,STO is 1st sur, value = 100
l2.zSetSystemAper(aType, stopSurf, appValue)
# General data (we need set whatever is really required ... the following
# is just shown as an example)
unitCode, rayAimingType, globalRefSurf = 0, 0, 1 # mm, off,ref=1st surf
useEnvData, temp, pressure = 0, 20, 1 # off, 20C, 1ATM
setSystemArg = (unitCode, stopSurf, rayAimingType,
useEnvData, temp, pressure, globalRefSurf)
l2.zSetSystem(*setSystemArg)
# Setup Field data
l2.zSetField(0, 0, 3, 1) # number of fields = 3
l2.zSetField(1, 0, 0) # 1st field, on-axis x, on-axis y, weight = 1 (default)
l2.zSetField(3, 0, 10, 1.0, 0.0, 0.0, 0.0) # 2nd field
l2.zSetField(2,0,5,2.0,0.5,0.5,0.5,0.5, 0.5) # 3rd field
#Setup the system, wavelength, (but not the field points)
l2.zInsertSurface(2)
l2.zInsertSurface(3)
#Set surface data, note that by default, all surfaces are Standard type
# OBJ: Surface 0
l2.zSetSurfaceData(0,3,500.00) #OBJ thickness = 0.5 m or 500 mm
#STO: Surface 1
l2.zSetSurfaceData(1,2,0) #STO Radius = Infinity
l2.zSetSurfaceData(1,3,20.00) #STO Thickness = 20 mm
l2.zSetSurfaceData(1,5,50.00) #STO Semi-diameter = 50 mm
#Surface 2
l2.zSetSurfaceData(2,2,1/150) #Surf2 Radius = 150 mm
l2.zSetSurfaceData(2,3,100.0) #Surf2 Thickness = 100 mm
l2.zSetSurfaceData(2,4,'BK7') #Surf2 Glass, type = BK7
l2.zSetSurfaceData(2,5,65.00) #Surf2 Semi-diameter = 65.00 mm
#Surface 3
l2.zSetSurfaceData(3,2,-1/600) #Surf3 Radius = -600 mm
l2.zSetSurfaceData(3,3,300.00) #Surf3 Thickness = 300 mm
l2.zSetSurfaceData(3,5,65.00) #Surf3 Semi-diameter = 65.00 mm
# Perform Quick Focus
l2.zQuickFocus(3,1)
# push lens
l2.zPushLens(update=1)
Explanation: Set up lens surfaces in the second ZEMAX DDE server. Towards the end, zPushLens() is called so that the LDE is updated with the just-made lens.
End of explanation
l2.ipzCaptureWindow('L3d')
Explanation: Layout plot of the second lens
End of explanation
l2.ipzCaptureWindow('Spt', percent=15, gamma=0.55)
Explanation: Spot diagram of the second lens
End of explanation
l1.ipzCaptureWindow('Lay')
Explanation: Just to demonstrate that the first lens (in the first ZEMAX server) is still available, the Layout plot is rendered again.
End of explanation
l1.ipzCaptureWindow('Spt', percent=15, gamma=0.55)
Explanation: Spot diagram of the first lens
End of explanation
l1.ipzCaptureWindow('Mtf', percent=15, gamma=0.5)
Explanation: MTF plot of the first lens
End of explanation
l1.zSetMacroPath(r"C:\PROGRAMSANDEXPERIMENTS\ZEMAX\Macros")
Explanation: Executing ZPL macro
Lastly, here is an example of how to execute a ZPL macro using PyZDDE.
Since ZEMAX can execute ZPL macros present in a set folder (generally the default macro folder in the data folder), the appropriate macro folder path needs to be set if it is not the default macro folder path.
End of explanation
l1.zExecuteZPLMacro('GLO')
Explanation: The following command executes the ZPL macro 'GLOBAL' provided by ZEMAX. The macro computes the global vertex coordinates or orientations surface by surface, and outputs a text window within the ZEMAX environment. Maximize (if required) the ZEMAX application window to see the output after executing the following command.
End of explanation
pyz.closeLink() # Also, l1.close(); l2.close()
Explanation: Close the DDE links
End of explanation |
8,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Investigating the character of the Theis well function
Introduction
In the previous section the Theis well function was introduced. The function, which is in fact the function known as the exponential integral by mathematicians, proved available in the standard library of the Python module scipy.special. We modified it a little to make it match the Theis well function exactly and gave it the name "W" like it has in groundwater hydrology books. Then we used it in some examples.
In this chapter we will investigate the character of the Theis well function more accurately.
Instead of looking for the function in the available library we could have computed the function ourselves, for instance by numerical integration:
$$ W(u) = \int_u^{\infty} \frac {e^{-y}} y dy \approx \sum_0^N \frac {e^{-y_i}} {y_i} \Delta y_i $$
where $y_0 = u$ and $N$ is sufficiently large.
Step2: Try it out
Step4: It seems that our numerical integration is a fair approximation to four significant digits, but not better, even when computed with 1000 steps as we did. So it is relatively easy to create one's own numerically computed value of an analytical expression like the exponential integral.
Theis well function as a power series
The Theis well function can also be expressed as a power series. This expression has certain advantages, as it gives insight into the character of the function and allows important simplifications and deductions.
$$ W(u) = -0.5772 - \ln(u) + u - \frac {u^2} {2 \cdot 2!} + \frac {u^3} {3 \cdot 3!} - \frac {u^4} {4 \cdot 4!} + ... $$
This series too can readily be computed numerically by first defining a function for it. The sum will be computed in a loop. To avoid having to compute factorials, it is easiest to compute each successive term from the previous one.
So to get from term n to term n+1
Step5: Compare the three methods of computing the well function.
Step6: We see that all three methods yield the same results.
Next we show the well function as it is shown in groundwater hydrology books.
Step7: The curve W(u) versus u runs counter-intuitively and is, therefore, confusing. Therefore, it is generally presented as W(u) versus 1/u instead, as shown below
Step8: Now W(u) resembles the actual drawdown, which increases with time.
The reason that this is so becomes clear from the fact that
$$ u = \frac {r^2 S} {4 kD t} $$
and that
$$ \frac 1 u = \frac {4 kDt} {r^2 S} = \frac {4 kD} S \frac t {r^2} $$
which shows that $\frac 1 u$ increases with time, so that the values of $\frac 1 u$ on the $\frac 1 u$ axis are proportional with time and so the drawdown, i.e., the well function $W(u)$, increases with $\frac 1 u$, which is less confusing.
The graph of $W(u)$ versus $\frac 1 u$ is called the Theis type curve. Its vertical axis is proportional to the drawdown and its horizontal axis proportional to time.
The same curve is shown below but now on a linear vertical scale and a logarithmic horizontal scale. The vertical scale was reversed (see values on y-axis) to obtain a curve that illustrates the decline of groundwater head with time caused by the extraction. This way of presenting is probably least confusing when reading the curve.
Step9: Logarithmic approximation of the Theis type curve
We see that after some time, the drawdown is linear when only the time-axis is logarithmic. This suggests that a logarithmic approximation of the time-drawdown curve is accurate after some time.
That this is indeed the case can be deduced from the power series description of the type curve
Step10: Hence, in any practical situation, the logarithmic approximation is accurate enough when $u<0.01$.
The approximation of the Theis type curve can now be elaborated
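How good the logarithmic approximation $W(u) \approx -0.5772 - \ln(u)$ already is near $u = 0.01$ can be checked directly:

```python
import numpy as np
from scipy.special import expi

for u in (0.1, 0.01, 0.001):
    exact = -expi(-u)                 # the Theis well function W(u)
    approx = -0.5772 - np.log(u)      # logarithmic approximation
    print(u, exact, approx, abs(exact - approx))   # the smaller u, the better the fit
```

The absolute error is roughly of the order of u itself, which is why u < 0.01 is a safe criterion in practice.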
Step11: This shows that the radius of influence is limited. We can now approximate this radius of influence by saying that the radius is where the approximated Theis curve, that is, the straight red line in the graph, intersects the zero drawdown, i.e. $W(u) = 0$.
Hence, for the radius of influence, R, we have
$$ \ln \frac {2.25 kD t} {R^2 S} = 0 $$
implying that
$$ \frac {2.25 kD t } { R^2 S } = 1 $$
$$ R =\sqrt { \frac {2.25 kD t} S} $$
with R the radius of influence. Computing the radius of influence is an easy way to determine how far out the drawdown affects the groundwater heads.
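A numeric sketch of the radius-of-influence formula; the kD, S and t values below are examples only:

```python
import numpy as np

kD = 500.0   # transmissivity, m2/d (example value)
S = 1e-3     # storage coefficient, dimensionless (example value)
t = 1.0      # time since pumping started, d

R = np.sqrt(2.25 * kD * t / S)
print(R)     # about 1060 m after one day of pumping
```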
Pumping test
Introduction
Below are the data that were obtained from a pumping test carried out on the site "Oude Korendijk" south of Rotterdam in the Netherlands (see Kruseman and De Ridder, p. 56, 59). The piezometers are all open at 20 m below ground surface. The groundwater head is shallow, within a m from ground surface. The first 18 m below ground surface consist of clay, peat and clayey fine sand. These layers form a practically impermeable confining unit. Below this, between 18 and 25 m below ground surface, are 7 m of sand and some gravel that form the aquifer. Fine sandy and clayey sediments thereunder form the base of the aquifer, which is considered impermeable.
Piezometers were installed at 30, 90 and 215 m from the well, open at 20 m below ground surface. The well has its screen installed over the whole thickness of the aquifer. We consider the aquifer as confined with no leakage. But we should look at the drawdown curves with a critical eye to verify to what extent this assumption holds true.
The drawdown data for the three piezometers is given below. The first column is time after the start of the pump in minutes; the second column is the drawdown in m.
The well extracts 788 m3/d
The objective of the pumping test is to determine the properties kD and S of the aquifer.
The data | Python Code:
import scipy.special as sp
import numpy as np
from scipy.special import expi
def W(u): return -expi(-u)
def W1(u):
    """Returns Theis' well function approximated by numerical integration.

    Works only for scalar u.
    """
    if not np.isscalar(u):
        raise ValueError("u must be a scalar")
    LOG10INF = 2  # integrate up to y = 1e2; exp(-100) is negligible (~4e-44)
    y = np.logspace(np.log10(u), LOG10INF, 1000)  # a thousand intermediate values
    ym = 0.5 * (y[:-1] + y[1:])
    Dy = np.diff(y)
    w = np.sum( np.exp(-ym) / ym * Dy )
    return w
Explanation: Investigating the character of the Theis well function
Introduction
In the previous section the Theis well function was introduced. The function, which is in fact the function known as the exponential integral by mathematicians, proved available in the standard library of the Python module scipy.special. We modified it a little to make it match the Theis well function exactly and gave it the name "W" like it has in groundwater hydrology books. Then we used it in some examples.
In this chapter we will investigate the character of the Theis well function more accurately.
Instead of looking for the function in the available library we could have computed the function ourselves, for instance by numerical integration:
$$ W(u) = \int_u^{\infty} \frac {e^{-y}} y dy \approx \sum_0^N \frac {e^{-y_i}} {y_i} \Delta y_i $$
where $y_0 = u$ and $N$ is sufficiently large.
End of explanation
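How the accuracy of the midpoint integration depends on the number of grid points can be checked against scipy's expi; W_numeric below is a parameterized variant of W1 written only for this comparison:

```python
import numpy as np
from scipy.special import expi

def W_numeric(u, n_steps):
    # midpoint sum of exp(-y)/y on a log-spaced grid from u up to 1e2
    y = np.logspace(np.log10(u), 2, n_steps)
    ym = 0.5 * (y[:-1] + y[1:])
    return np.sum(np.exp(-ym) / ym * np.diff(y))

u = 1e-3
exact = -expi(-u)
for n in (10, 100, 1000, 10000):
    print(n, abs(W_numeric(u, n) - exact))   # the error shrinks as the grid is refined
```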
U = 4 * 10** -np.arange(11.) # generates values 4, 4e-1, 4e-2 .. 4e-10
print("{:>10s} {:>10s} {:>10s}".format('u ', 'W(u)','W1(u) '))
for u in U:
print("{0:10.1e} {1:10.4e} {2:10.4e}".format(u, W(u), W1(u)))
Explanation: Try it out
End of explanation
def W2(u):
    """Returns Theis well function computed as a power series"""
    tol = 1e-5
    w = -0.5772 - np.log(u) + u
    a = u
    for n in range(1, 100):
        a = -a * u * n / (n+1)**2  # next term of the alternating series
        w += a
        if np.all(np.abs(a) < tol):
            break
    return w
Explanation: It seems that our numerical integration is a fair approximation to four significant digits, but not better, even when computed with 1000 steps as we did. So it is relatively easy to create one's own numerically computed value of an analytical expression like the exponential integral.
Theis well function as a power series
The Theis well function can also be expressed as a power series. This expression has certain advantages, as it gives insight into the function's character and allows important simplifications and deductions.
$$ W(u) = -0.5772 - \ln(u) + u - \frac {u^2} {2 \cdot 2!} + \frac {u^3} {3 \cdot 3!} - \frac {u^4} {4 \cdot 4!} + ... $$
This series too can readily be computed numerically by first defining a function for it. The sum will be computed in a loop. To avoid having to compute factorials, it is easiest to compute each successive term from the previous one.
So to get from term $n$ to term $n+1$:
$$ \frac {u^{n+1}} {(n+1) . (n+1)!} = \frac {u^n} { n . n!} \times \frac {u \, n} {(n+1)^2} $$
This series is implemented below.
End of explanation
U = 4.0 * 10** -np.arange(11.) # generates values 4, 4e-1, 4e-2 .. 4e-10
print("{:>10s} {:>10s} {:>10s} {:>10s}".format('u ', 'W(u) ','W1(u) ', 'W2(u) '))
for u in U:
print("{0:10.1e} {1:10.4e} {2:10.4e} {3:10.4e}".format(u, W(u), W1(u), W2(u)))
Explanation: Compare the three methods of computing the well function.
End of explanation
u = np.logspace(-7, 1, 71)
import matplotlib.pylab as plt
fig1= plt.figure()
ax1 = fig1.add_subplot(111)
ax1.set(xlabel='u', ylabel='W(u)', title='Theis type curve versus u', yscale='log', xscale='log')
ax1.grid(True)
ax1.plot(u, W(u), 'b', label='-expi(-u)')
#ax1.plot(u, W1(u), 'rx', label='integral') # works only for scalars
ax1.plot(u, W2(u), 'g+', label='power series')
ax1.legend(loc='best')
plt.show()
Explanation: We see that all three methods yield the same results.
Next we show the well function as it is shown in groundwater hydrology books.
End of explanation
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
ax2.set(xlabel='1/u', ylabel='W(u)', title='Theis type curve versus 1/u', yscale='log', xscale='log')
ax2.grid(True)
ax2.plot(1/u, W(u))
plt.show()
Explanation: The curve of W(u) versus u runs counter-intuitively and is therefore confusing. It is therefore generally presented as W(u) versus 1/u instead, as shown below.
End of explanation
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
ax2.set(xlabel='1/u', ylabel='W(u)', title='Theis type curve versus 1/u', yscale='linear', xscale='log')
ax2.grid(True)
ax2.plot(1/u, W(u))
ax2.invert_yaxis()
plt.show()
Explanation: Now W(u) resembles the actual drawdown, which increases with time.
The reason that this is so becomes clear from the fact that
$$ u = \frac {r^2 S} {4 kD t} $$
and that
$$ \frac 1 u = \frac {4 kDt} {r^2 S} = \frac {4 kD} S \frac t {r^2} $$
which shows that $\frac 1 u$ increases with time, so that the values on the $\frac 1 u$ axis are proportional to time, and hence the drawdown, i.e. the well function $W(u)$, increases with $\frac 1 u$, which is less confusing.
The graph of $W(u)$ versus $\frac 1 u$ is called the Theis type curve. Its vertical axis is proportional to the drawdown and its horizontal axis to time.
The same curve is shown below, but now on a linear vertical scale and a logarithmic horizontal scale. The vertical scale was reversed (see the values on the y-axis) to obtain a curve that illustrates the decline of the groundwater head with time caused by the extraction. This way of presenting is probably the least confusing when reading the curve.
End of explanation
U = np.logspace(-2, 0, 21)
Wa = lambda u : -0.5772 - np.log(u)
print("{:>12s} {:>12s} {:>12s} {:>12s}".format('u','W(u)','Wa(u)','1-Wa(u)/W(u)'))
print("{:>12s} {:>12s} {:>12s} {:>12s}".format(' ',' ',' ','the error'))
for u in U:
print("{:12.3g} {:12.3g} {:12.3g} {:12.1%}".format(u, W(u), Wa(u), 1-Wa(u)/W(u)))
U = np.logspace(-7, 1, 81)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set(xlabel='1/u', ylabel='W(u)', title='Theis type curve and its logarithmic approximation', yscale='linear', xscale='log')
ax.grid(True)
ax.plot(1/U, W(U), 'b', linewidth = 2., label='Theis type curve')
ax.plot(1/U, Wa(U), 'r', linewidth = 0.25, label='log approximation')
ax.invert_yaxis()
plt.legend(loc='best')
plt.show()
Explanation: Logarithmic approximation of the Theis type curve
We see that after some time the drawdown is linear when only the time axis is logarithmic. This suggests that a logarithmic approximation of the time-drawdown curve is accurate after some time.
That this is indeed the case can be deduced from the power series description of the type curve:
$$ W(u) = -0.5772 - \ln(u) + u - \frac {u^2} {2 \cdot 2!} + \frac {u^3} {3 \cdot 3!} - \frac {u^4} {4 \cdot 4!} + ... $$
It is clear that all terms to the right of $u$ will be smaller than $u$ when $u<1$. Hence, when $u$ is so small that it can be neglected relative to $\ln(u)$, all the terms to the right of $\ln(u)$ can be neglected as well. Therefore we have the following approximation
$$ W(u) \approx -0.5772 -\ln(u) + O(u) $$
for
$$ -\ln(u)>>u \,\,\,\rightarrow \,\,\, \ln(u)<<-u \,\,\, \rightarrow \,\,\, u<<e^{-u} \, \approx \,1 $$
which is practically the case for $u<0.01$, as can also be seen in the graph at $1/u = 10^2$. From the graph one may even conclude that for $1/u>10$, i.e. $u<0.1$, the type curve is straight and can therefore be accurately computed using the logarithmic approximation.
Below, the error between the full Theis curve $W(u)$ and the approximation $Wa(u) = -0.5772 - \ln(u)$ is computed and shown. This reveals that at $u=0.01$ the error is 5.4% and at $u=0.001$ it has come down to only 0.2%.
End of explanation
U = np.logspace(-7, 1, 81)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set(xlabel='u', ylabel='W(u)', title='Theis type curve and its logarithmic approximation', yscale='linear', xscale='log')
ax.grid(True)
ax.plot(U, W(U), 'b', linewidth = 2., label='Theis type curve')
ax.plot(U, Wa(U), 'r', linewidth = 0.25, label='log approximation')
ax.invert_yaxis()
plt.legend(loc='best')
plt.show()
Explanation: Hence, in any practical situation, the logarithmic approximation is accurate enough when $u<0.01$.
The approximation of the Theis type curve can now be elaborated:
$$ Wa (u) \approx -0.5772 - \ln(u) = \ln(e^{-0.5772}) - \ln(u) = \ln(0.5615) - \ln(u) = \ln \frac {0.5615} {u} $$
Because $u = \frac {r^2 S} {4 kD t}$ we have, with $4 \times 0.5615 \approx 2.25$,
$$ W(u) \approx \ln \frac {2.25 kD t} {r^2 S} $$
and so the drawdown approximation becomes
$$ s \approx \frac Q {4 \pi kD} \ln \frac {2.25 kD t} {r^2 S} $$
The condition u<0.1 can be translated to $\frac {r^2 S} {4 kD t} < 0.1$ or
$$\frac t {r^2} > 2.5 \frac {S} {kD}$$
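As a quick numerical sketch of this condition (the values kD = 400 m2/d, S = 0.001 and r = 100 m are illustrative assumptions, not values from the text):

```python
# Illustrative aquifer properties (assumed, not from the text)
kD = 400.0   # transmissivity [m2/d]
S = 1e-3     # storage coefficient [-]
r = 100.0    # distance from the well [m]

# u < 0.1  <=>  t > 2.5 * S * r**2 / kD
t_min = 2.5 * S * r**2 / kD
print(t_min)  # 0.0625 d, i.e. the approximation holds after about 90 minutes
```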
Radius of influence
The previous logarithmic drawdown type curve versus $1/u$ can be seen an image of the drawdown for a fixed distance and varying time. This is because $1/u$ is proportional to the real time. On the other hand, the drawdown type curve versus u may be regarded as the drawdown at a fixed time for varying distance. This follows from
the drawdown versus $u$, which is
$$ W(u)\approx \ln \frac {2.25 kD t} { r^2 S} \,\,\,\, versus\,\,\,\, \ln(u) = \ln \frac {r^2 S} {4 kD t} = 2 \ln \left( \sqrt{\frac {S} {4 kD t}}\, r\right) $$
That is, proportional to $r$ on a log scale. The plot reveals this:
End of explanation
# t[min], s[m]
H30 = [ [0.0, 0.0],
[0.1, 0.04],
[0.25, 0.08],
[0.50, 0.13],
[0.70, 0.18],
[1.00, 0.23],
[1.40, 0.28],
[1.90, 0.33],
[2.33, 0.36],
[2.80, 0.39],
[3.36, 0.42],
[4.00, 0.45],
[5.35, 0.50],
[6.80, 0.54],
[8.30, 0.57],
[8.70, 0.58],
[10.0, 0.60],
[13.1, 0.64]]
# t[min], s[m]
H90= [[0.0, 0.0],
[1.5, 0.015],
[2.0, 0.021],
[2.16, 0.23],
[2.66, 0.044],
[3.00, 0.054],
[3.50, 0.075],
[4.00, 0.090],
[4.33, 0.104],
[5.50, 0.133],
[6.0, 0.154],
[7.5, 0.178],
[9.0, 0.206],
[13.0, 0.250],
[15.0, 0.275],
[18.0, 0.305],
[25.0, 0.348],
[30.0, 0.364]]
# t[min], s[m]
H215=[[0.0, 0.0],
[66.0, 0.089],
[127., 0.138],
[185., 0.165],
[251., 0.186]]
Explanation: This shows that the radius of influence is limited. We can now approximate this radius of influence by saying that it is where the approximated Theis curve, that is, the straight red line in the graph, intersects zero drawdown, i.e. $W(u) = 0$.
Hence, for the radius of influence, R, we have
$$ \ln \frac {2.25 kD t} {R^2 S} = 0 $$
implying that
$$ \frac {2.25 kD t } { R^2 S } = 1 $$
$$ R =\sqrt { \frac {2.25 kD t} S} $$
with R the radius of influence. Computing the radius of influence is an easy way to determine how far out the drawdown affects the groundwater heads.
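A small numerical sketch of this formula (the aquifer values are again illustrative assumptions, not from the text):

```python
import math

# Illustrative aquifer properties (assumed, not from the text)
kD = 400.0   # transmissivity [m2/d]
S = 1e-3     # storage coefficient [-]
t = 1.0      # time since pumping started [d]

R = math.sqrt(2.25 * kD * t / S)   # radius of influence [m]
print(R)  # about 949 m after one day of pumping
```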
Pumping test
Introduction
Below are the data obtained from a pumping test carried out at the site "Oude Korendijk", south of Rotterdam in the Netherlands (see Kruseman and De Ridder, pp. 56, 59). The piezometers are all open at 20 m below ground surface. The groundwater head is shallow, within a meter of ground surface. The first 18 m below ground surface consist of clay, peat and clayey fine sand. These layers form a practically impermeable confining unit. Below this, between 18 and 25 m below ground surface, are 7 m of sand and some gravel that form the aquifer. Fine sandy and clayey sediments underneath form the base of the aquifer, which is considered impermeable.
Piezometers were installed at 30, 90 and 215 m from the well, open at 20 m below ground surface. The well has its screen installed over the whole thickness of the aquifer. We consider the aquifer as confined with no leakage, but we should look at the drawdown curves with a critical eye to verify to what extent this assumption holds true.
The drawdown data for the three piezometers is given below. The first column is time after the start of the pump in minutes; the second column is the drawdown in m.
The well extracts 788 m3/d
The objective of the pumping test is to determine the properties kD and S of the aquifer.
The data:
End of explanation |
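As a sketch of how this objective could be approached with the logarithmic approximation derived above — the drawdown is a straight line in ln(t) — here is an illustrative straight-line fit; the three-point subset of the H30 data and the fit itself are a sketch, not part of the text:

```python
import numpy as np

# Drawdown s [m] versus time t [min] at r = 30 m (subset of the H30 data above)
data = np.array([[0.1, 0.04], [1.0, 0.23], [10.0, 0.60]])
t, s = data[:, 0], data[:, 1]

# s ~ (Q / (4 pi kD)) * ln(2.25 kD t / (r**2 S)) is linear in ln(t):
# s = a * ln(t) + b, so a least-squares line gives slope a and intercept b,
# from which kD = Q / (4 pi a) and then S follow.
a, b = np.polyfit(np.log(t), s, 1)
```

With Q = 788 m3/d, the slope would give kD = Q / (4 pi a); working this out consistently for all three piezometers is the subject of the pumping-test analysis.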
Description:
디리클레 분포
Dirichlet distribution
The Dirichlet distribution can be regarded as an extension of the beta distribution. The beta distribution is used in Bayesian models of a single (univariate) random variable taking values between 0 and 1, while the Dirichlet distribution is used in Bayesian models of multivariate random variables taking values between 0 and 1. The Dirichlet distribution, however, has the additional constraint that the multivariate random variables must sum to 1.
For example, a random variable following a Dirichlet distribution with $K=3$ can take samples such as:
$$(1, 0, 0)$$
$$(0.5, 0.5, 0)$$
$$(0.2, 0.3, 0.5)$$
The probability density function of the Dirichlet distribution is
$$ f(x_1, x_2, \cdots, x_K) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1} $$
where
$$ \mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)} {\Gamma\bigl(\sum_{i=1}^K \alpha_i\bigr)} $$
and the following constraint holds:
$$ \sum_{i=1}^{K} x_i = 1 $$
In this expression $\boldsymbol\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$ is the parameter vector of the Dirichlet distribution.
Relationship between the beta and Dirichlet distributions
The beta distribution can be viewed as a Dirichlet distribution with $K=2$.
That is, setting $x_1 = x$, $x_2 = 1 - x$, $\alpha_1 = a$, $\alpha_2 = b$ gives
$$
\begin{eqnarray}
\text{Beta}(x;a,b)
&=& \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, x^{a-1}(1-x)^{b-1} \\
&=& \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\, x_1^{\alpha_1 - 1} x_2^{\alpha_2 - 1} \\
&=& \frac{1}{\mathrm{B}(\alpha_1, \alpha_2)} \prod_{i=1}^2 x_i^{\alpha_i - 1}
\end{eqnarray}
$$
Moment properties of the Dirichlet distribution
The expected value, mode, and variance of the Dirichlet distribution are as follows.
Expected value
$$E[x_k] = \dfrac{\alpha_k}{\alpha}$$
where
$$\alpha=\sum\alpha_k$$
Mode
$$ \dfrac{\alpha_k - 1}{\alpha - K}$$
Variance
$$\text{Var}[x_k] =\dfrac{\alpha_k(\alpha - \alpha_k)}{\alpha^2(\alpha + 1)}$$
The expected-value formula shows that the parameter vector $\boldsymbol\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$ is a shape factor that determines which of $(x_1, x_2, \ldots, x_K)$ is likely to be larger. If all $\alpha_i$ are equal, all $x_i$ have the same distribution.
The variance formula also shows that the larger the magnitude of $\boldsymbol\alpha$, the smaller the variance, i.e., the more likely a particular value becomes.
Application of the Dirichlet distribution
Consider the following problem, which is a special case of the Dirichlet distribution with $K=3$ and $\alpha_1 = \alpha_2 = \alpha_3$.
<img src="https://datascienceschool.net/upfiles/d0acaf490aaa41389b975e20c58ac1ee.png" style="width:90%; margin: 0 auto 0 auto;">
Step1: The following function plots the generated points on a two-dimensional triangle so that they can be inspected visually.
Step2: If we approach this problem naively and generate three independent uniform random variables between 0 and 1, then normalize them so that their sum equals 1, the probability mass concentrates near the center of the triangle as in the following figure; in other words, the random variable is not evenly distributed.
Step3: The Dirichlet distribution with $\alpha=(1,1,1)$, however, generates samples evenly, as shown below.
Step4: When $\alpha$ is not $(1,1,1)$, the distribution can be made to concentrate at particular locations, as shown below. Using this property, the Dirichlet distribution can be applied to Bayesian estimation of the parameters of a multinomial distribution.
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
fig = plt.figure()
ax = Axes3D(fig)
x = [1,0,0]
y = [0,1,0]
z = [0,0,1]
verts = [list(zip(x, y, z))]
ax.add_collection3d(Poly3DCollection(verts, edgecolor="k", lw=5, alpha=0.4))
ax.text(1, 0, 0, "(1,0,0)", position=(0.7,0.1))
ax.text(0, 1, 0, "(0,1,0)", position=(0,1.04))
ax.text(0, 0, 1, "(0,0,1)", position=(-0.2,0))
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
ax.view_init(30, -20)
tmp_planes = ax.zaxis._PLANES
# set origin ( http://stackoverflow.com/questions/15042129/changing-position-of-vertical-z-axis-of-3d-plot-matplotlib )
ax.yaxis._PLANES = (tmp_planes[2], tmp_planes[3],
tmp_planes[0], tmp_planes[1],
tmp_planes[4], tmp_planes[5])
ax.zaxis._PLANES = (tmp_planes[2], tmp_planes[3],
tmp_planes[0], tmp_planes[1],
tmp_planes[4], tmp_planes[5])
plt.show()
Explanation: Dirichlet distribution
The Dirichlet distribution can be regarded as an extension of the beta distribution. The beta distribution is used in Bayesian models of a single (univariate) random variable taking values between 0 and 1, while the Dirichlet distribution is used in Bayesian models of multivariate random variables taking values between 0 and 1. The Dirichlet distribution, however, has the additional constraint that the multivariate random variables must sum to 1.
For example, a random variable following a Dirichlet distribution with $K=3$ can take samples such as:
$$(1, 0, 0)$$
$$(0.5, 0.5, 0)$$
$$(0.2, 0.3, 0.5)$$
The probability density function of the Dirichlet distribution is
$$ f(x_1, x_2, \cdots, x_K) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1} $$
where
$$ \mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)} {\Gamma\bigl(\sum_{i=1}^K \alpha_i\bigr)} $$
and the following constraint holds:
$$ \sum_{i=1}^{K} x_i = 1 $$
In this expression $\boldsymbol\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$ is the parameter vector of the Dirichlet distribution.
Relationship between the beta and Dirichlet distributions
The beta distribution can be viewed as a Dirichlet distribution with $K=2$.
That is, setting $x_1 = x$, $x_2 = 1 - x$, $\alpha_1 = a$, $\alpha_2 = b$ gives
$$
\begin{eqnarray}
\text{Beta}(x;a,b)
&=& \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, x^{a-1}(1-x)^{b-1} \\
&=& \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\, x_1^{\alpha_1 - 1} x_2^{\alpha_2 - 1} \\
&=& \frac{1}{\mathrm{B}(\alpha_1, \alpha_2)} \prod_{i=1}^2 x_i^{\alpha_i - 1}
\end{eqnarray}
$$
Moment properties of the Dirichlet distribution
The expected value, mode, and variance of the Dirichlet distribution are as follows.
Expected value
$$E[x_k] = \dfrac{\alpha_k}{\alpha}$$
where
$$\alpha=\sum\alpha_k$$
Mode
$$ \dfrac{\alpha_k - 1}{\alpha - K}$$
Variance
$$\text{Var}[x_k] =\dfrac{\alpha_k(\alpha - \alpha_k)}{\alpha^2(\alpha + 1)}$$
The expected-value formula shows that the parameter vector $\boldsymbol\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$ is a shape factor that determines which of $(x_1, x_2, \ldots, x_K)$ is likely to be larger. If all $\alpha_i$ are equal, all $x_i$ have the same distribution.
The variance formula also shows that the larger the magnitude of $\boldsymbol\alpha$, the smaller the variance, i.e., the more likely a particular value becomes.
Application of the Dirichlet distribution
Consider the following problem, which is a special case of the Dirichlet distribution with $K=3$ and $\alpha_1 = \alpha_2 = \alpha_3$.
<img src="https://datascienceschool.net/upfiles/d0acaf490aaa41389b975e20c58ac1ee.png" style="width:90%; margin: 0 auto 0 auto;">
The three-dimensional Dirichlet problem can be viewed as generating points on the equilateral triangular face that connects the three points (1,0,0), (0,1,0), (0,0,1) in three-dimensional space, as shown in the following figure.
End of explanation
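The moment formulas above can be verified numerically; a small sketch (the parameter vector alpha = (3, 4, 2) is an arbitrary illustrative choice):

```python
import numpy as np
import scipy.stats as stats

alpha = np.array([3.0, 4.0, 2.0])   # illustrative parameter vector
a0 = alpha.sum()

mean = alpha / a0                                    # E[x_k] = alpha_k / alpha
var = alpha * (a0 - alpha) / (a0**2 * (a0 + 1))      # Var[x_k]

# scipy's Dirichlet implementation agrees with the closed-form expressions
assert np.allclose(mean, stats.dirichlet(alpha).mean())
assert np.allclose(var, stats.dirichlet(alpha).var())
```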
def plot_triangle(X, kind):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
X1 = (X-n12).dot(m1)
X2 = (X-n12).dot(m2)
g = sns.jointplot(X1, X2, kind=kind, xlim=(-0.8,0.8), ylim=(-0.45,0.9))
g.ax_joint.axis("equal")
plt.show()
Explanation: The following function plots the generated points on a two-dimensional triangle so that they can be inspected visually.
End of explanation
X1 = np.random.rand(1000, 3)
X1 = X1/X1.sum(axis=1)[:, np.newaxis]
plot_triangle(X1, kind="scatter")
plot_triangle(X1, kind="hex")
Explanation: If we approach this problem naively and generate three independent uniform random variables between 0 and 1, then normalize them so that their sum equals 1, the probability mass concentrates near the center of the triangle as in the following figure; in other words, the random variable is not evenly distributed.
End of explanation
X2 = sp.stats.dirichlet((1,1,1)).rvs(1000)
plot_triangle(X2, kind="scatter")
plot_triangle(X2, kind="hex")
Explanation: The Dirichlet distribution with $\alpha=(1,1,1)$, however, generates samples evenly, as shown below.
End of explanation
def project(x):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
return np.dstack([(x-n12).dot(m1), (x-n12).dot(m2)])[0]
def project_reverse(x):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
return x[:,0][:, np.newaxis] * m1 + x[:,1][:, np.newaxis] * m2 + n12
eps = np.finfo(float).eps * 10
X = project([[1-eps,0,0], [0,1-eps,0], [0,0,1-eps]])
import matplotlib.tri as mtri
triang = mtri.Triangulation(X[:,0], X[:,1], [[0, 1, 2]])
refiner = mtri.UniformTriRefiner(triang)
triang2 = refiner.refine_triangulation(subdiv=6)
XYZ = project_reverse(np.dstack([triang2.x, triang2.y, 1-triang2.x-triang2.y])[0])
pdf = sp.stats.dirichlet((1,1,1)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
pdf = sp.stats.dirichlet((3,4,2)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
pdf = sp.stats.dirichlet((16,24,14)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
Explanation: When $\alpha$ is not $(1,1,1)$, the distribution can be made to concentrate at particular locations, as shown below. Using this property, the Dirichlet distribution can be applied to Bayesian estimation of the parameters of a multinomial distribution.
End of explanation |
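A sketch of that last point — the Dirichlet-multinomial conjugate update (the prior and counts here are hypothetical values chosen for illustration):

```python
import numpy as np

alpha_prior = np.array([1.0, 1.0, 1.0])  # uniform Dirichlet prior on the 2-simplex
counts = np.array([2, 3, 1])             # hypothetical multinomial observations

alpha_post = alpha_prior + counts        # the posterior is again a Dirichlet
posterior_mean = alpha_post / alpha_post.sum()
```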
Description:
Titanic Data Analysis
1. Introduction
In this project I will perform a data analysis on the sample Titanic dataset. The dataset contains
demographics and passenger information of 891 out of the 2224 passengers and crew members on board the Titanic.
The data was obtained at https://www.kaggle.com/c/titanic/data.
Step1: 2.2 Data Cleanup
The data cleanup procedure consists of the following steps:
Step2: Here I want to identify duplicates in the data.
Step3: In the dataset there are no data duplicates.
2.3 Missing Values
In this subsection I want to find out how many missing values there are in the dataframe.
Step4: We can see that roughly 20% of the passengers in the dataset do not have a stated age, especially the male passengers. This fact should be considered when examining the question of whether age, considering gender, determined the chances of survival.
3. Data Analysis
At first I want to gain a basic overview of the age distribution, the number of female and male passengers, and the number of survived and dead passengers.
Step5: The age distribution can be approximated as a gaussian distribution with a mean around 25-30 years.
In the second plot it can be noticed that there were almost twice as many male as female passengers. The third plot shows that there were more deaths than survivals.
3.1 Did the gender determine the chances of survival?
Step6: According to the plot it is obvious that male passengers had a significant lower chance of survival in comparison to female passengers.
3.2 Did the social-economic status determine the chances of survival?
Step7: The left plot shows the tendency that a lower fare price decreases the ratio of survival. In the right plot a steady decline of the survival ratio with lower passenger class can be observed. In summary, the social-economic status did have an impact on the chances of survival: passengers with higher social-economic status had a higher chance to survive.
3.3 Did age, regardless of gender, determine your chances of survival?
Step8: It can be observed that younger and older passengers, regardless of their gender, had a higher survival ratio than middle-aged passengers.
3.4 Did the age considering the gender determine the chances of survival?
Step9: If we take the gender into account, it can be seen that male passengers in their middle ages had a much lower ratio of survival than women. Male children, however, still had good chances of survival.
In contrast, female passengers had a higher survival ratio than male passengers throughout all age groups.
3.5 Did the number of children aboard per passenger determine the chances of survival? | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
titanic=pd.read_csv("titanic-data.csv")
titanic.head()
Explanation: Titanic Data Analysis
1. Introduction
In this project I will perform a data analysis on the sample Titanic dataset. The dataset contains
demographics and passenger information of 891 out of the 2224 passengers and crew members on board the Titanic.
The data was obtained at https://www.kaggle.com/c/titanic/data.
In my analysis I will examine the factors that may have increased the chances of survival. I will particularly focus on the following questions:
Did the gender determine the chances of survival?
Did the social-economic status determine the chances of survival?
Did the age considering the gender determine the chances of survival?
Did age, regardless of gender, determine your chances of survival?
Did the number of children aboard per passenger determine the chances of survival?
2. Data Wrangling
2.1 Data Description
The data contains the following information:
survival: Survival (0 = No; 1 = Yes)
pclass: Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
name: Name
sex: Gender
age: Age
sibsp: Number of Siblings/Spouses Aboard
parch: Number of Parents/Children Aboard
ticket: Ticket Number
fare: Passenger Fare
cabin: Cabin
embarked: Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
Here are the first five rows of the dataframe to give an simple overview over the data.
End of explanation
# Create new dataset without unwanted columns
titanic=titanic.drop(['Name','Ticket','Cabin','Embarked','SibSp'], axis=1)
titanic.head()
Explanation: 2.2 Data Cleanup
The data cleanup procedure consists of the following steps:
1. Removal of unnecessary data
2. Removal of duplicates
3. Determine missing values in the data
Based on the questions I want to answer in this project, some of the columns in the dataset will not be important for further
examination and will therefore be removed. These columns are:
Name
Ticket
Cabin
Embarked
sibsp
The following code lines remove the mentioned columns and show the first five rows of the dataframe to confirm that the unnecessary data was removed.
End of explanation
# Identify whether duplicates exist in the data
titanic_duplicates = titanic.duplicated()
print('Number of duplicates: {}'.format(titanic_duplicates.sum()))
Explanation: Here I want to identify duplicates in the data.
End of explanation
# Calculating the number of missing values
titanic.isnull().sum()
# Determine the number of males and females with missing age in the dataset
missing_age_male = titanic[pd.isnull(titanic['Age'])]['Sex'] == 'male'
missing_age_female = titanic[pd.isnull(titanic['Age'])]['Sex'] == 'female'
print('Number of male passengers with missing age: {}'.format(missing_age_male.sum()))
print('Number of female passengers with missing age: {}'.format(missing_age_female.sum()))
Explanation: In the dataset there are no data duplicates.
2.3 Missing Values
In this subsection I want to find out how many missing values there are in the dataframe.
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(17, 5))
#Age distribution
ages=titanic["Age"].dropna()
#plt.figure(figsize=(7,7))
axes[0].hist(ages, bins=80, color='#377eb8',edgecolor = "Black")
axes[0].set_xlabel("Age/Years")
axes[0].set_ylabel('Count')
axes[0].set_title('Age distribution of the passengers')
#plt.show()
# Count of male and female passengers
gender=titanic['Sex']
counts = Counter(gender)
common = counts.most_common()
gender = [item[0] for item in common]
count = [item[1] for item in common]
axes[1].bar(np.arange(2), count, tick_label=gender, width=0.4, color='#377eb8',edgecolor = "Black")
axes[1].set_ylabel('Count')
axes[1].set_title('Number of female and male passengers.')
# Count of dead and survived passengers
survival=titanic['Survived']
counts=Counter(survival)
common = counts.most_common()
label=["Dead", "Survived"]
count=[item[1] for item in common]
axes[2].bar(np.arange(2), count, tick_label=label, width=0.4, color='#377eb8',edgecolor = "Black")
axes[2].set_ylabel('Count')
axes[2].set_title('Number of survived and dead passengers.')
plt.show()
Explanation: We can see that roughly 20% of the passengers in the dataset do not have a stated age, especially the male passengers. This fact should be considered when examining the question of whether age, considering gender, determined the chances of survival.
3. Data Analysis
At first I want to gain a basic overview of the age distribution, the number of female and male passengers, and the number of survived and dead passengers.
End of explanation
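The roughly-20% figure above can also be computed directly as the mean of the null mask; a sketch on a toy frame (the real notebook would apply this to the titanic dataframe):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the Titanic data
df = pd.DataFrame({"Age": [22.0, np.nan, 35.0, np.nan, 54.0]})

# isnull() gives a boolean mask; its mean is the fraction of missing ages
missing_fraction = df["Age"].isnull().mean()
print(missing_fraction)  # 0.4 for this toy frame
```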
# Group the PassengerId by gender and survival
g=titanic.groupby(["Survived","Sex"])["PassengerId"]
# Count how many passengers died or survived dependent on the gender
survived_men=g.get_group((1,"male")).count()
survived_women=g.get_group((1,"female")).count()
dead_men=g.get_group((0,"male")).count()
dead_women=g.get_group((0,"female")).count()
# Group the PassengerId by gender and count the Id's dependent on the gender to find out the total number of women and men
g=titanic.groupby("Sex")["PassengerId"]
men_sum=float(g.get_group(("male")).count())
women_sum=float(g.get_group(("female")).count())
#Normalization of dead and survived passengers depentend on the gender
p2=survived=[survived_men/men_sum, survived_women/women_sum]
p1=dead=[dead_men/men_sum, dead_women/women_sum]
# Plot the survival by gender ration
plt.figure(figsize=(7,7))
N=2
ind = np.arange(N)
width = 0.35
bar1 = plt.bar(ind, survived, width,color='#377eb8', edgecolor = "Black")
bar2 = plt.bar(ind+width, dead, width,color='#e41a1c', edgecolor = "Black")
plt.ylabel('Ratio of passengers')
plt.title('Survival by Gender')
plt.xticks(ind+width/2, ['Men', "Female"])
plt.legend((bar2, bar1), ('Dead', 'Survived'))
plt.figure(num=None, figsize=(1, 1), dpi=80, facecolor='w', edgecolor='k')
plt.show()
Explanation: The age distribution can be approximated as a Gaussian distribution with a mean around 25-30 years.
In the second plot it can be noticed that there were almost twice as many male as female passengers. The third plot shows that there were more deaths than survivals.
3.1 Did the gender determine the chances of survival?
End of explanation
################################ Survival regarding the fare ##########################
# Create a dataframe of all fares
fares_df=titanic[["Fare", "Survived"]]
fares=titanic["Fare"]
# Create 20 fare ranges from 0 $ - 300 $ and count how many fares from fares_df belong to each range
num_bins=20
bar_width=300/float(num_bins)
fare_ranges_all=[]
for i in np.arange(0,num_bins,1):
fare_ranges_all.append(len([x for x in fares if i*bar_width <= x < (i+1)*bar_width]))
# Create a dataframe with fares of passengers who survived
survived_fares=fares_df.loc[(fares_df["Survived"]==1)]["Fare"]
# Determine how many fares of passengers who survived belong in each of the 20 ranges
fare_ranges_survived=[]
for i in np.arange(0,num_bins,1):
fare_ranges_survived.append(len([x for x in survived_fares if i*bar_width <= x < (i+1)*bar_width]))
# Handle the case in which a fare range does not contain any counts (to avoid a divide-by-zero error during normalization)
for n,i in enumerate(fare_ranges_all):
if i==0:
fare_ranges_all[n]=1
################################ Survival regarding the class ##########################
# Get the Groupby object
g=titanic.groupby(["Survived","Pclass"])
# Count the passengers from each class who not have survived
dead_class_1=g.get_group((0,1))["PassengerId"].count()
dead_class_2=g.get_group((0,2))["PassengerId"].count()
dead_class_3=g.get_group((0,3))["PassengerId"].count()
# Count the passengers from each class who have survived
survived_class_1=g.get_group((1,1))["PassengerId"].count()
survived_class_2=g.get_group((1,2))["PassengerId"].count()
survived_class_3=g.get_group((1,3))["PassengerId"].count()
# Get the Groupby object
g=titanic.groupby(["Pclass"])
# Count the passengers in each class
passengers_class1=float(g.get_group((1))["PassengerId"].count())
passengers_class2=float(g.get_group((2))["PassengerId"].count())
passengers_class3=float(g.get_group((3))["PassengerId"].count())
# Plot the fare-survival relation
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
fare_ranges_all=np.array(fare_ranges_all).astype(float)
#Normalize the number of fares in each range
normed_survived_fares=np.array(fare_ranges_survived)/fare_ranges_all
axes[0].bar(np.arange(0, 300,bar_width)+bar_width/2.0,normed_survived_fares,width=bar_width, align="center", edgecolor = "Black")
axes[0].set_title("Survival by Fare")
axes[0].set_xlabel("Fare price / $")
axes[0].set_ylabel("Ratio of passengers")
# Plot the class-survival relation
y1, y2, y3=dead_class_1/passengers_class1, dead_class_2/passengers_class2, dead_class_3/passengers_class3
z1, z2, z3=survived_class_1/passengers_class1, survived_class_2/passengers_class2, survived_class_3/passengers_class3
dead=[y1, y2, y3]
survived=[z1, z2, z3]
width=0.25
ind = np.arange(3)
bar1 = axes[1].bar(ind, survived, width,color='#377eb8', edgecolor = "Black")
bar2 = axes[1].bar(ind+width, dead, width,color='#e41a1c', edgecolor = "Black")
axes[1].legend((bar2, bar1), ('Dead', 'Survived'))
axes[1].set_title("Survival by Class")
axes[1].set_xlabel("Class")
axes[1].set_ylabel("Ratio of passengers")
axes[1].set_xticks(ind+width/2)
axes[1].set_xticklabels(["1", "2", "3"])
plt.show()
Explanation: According to the plot it is obvious that male passengers had a significantly lower chance of survival in comparison to female passengers.
3.2 Did the social-economic status determine the chances of survival?
End of explanation
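The per-gender ratios computed with get_group and manual normalization above can also be obtained in one line with a groupby mean, since Survived is a 0/1 column; a sketch on a toy frame (column names as in the Titanic data):

```python
import pandas as pd

# Toy frame with the same column names as the Titanic data
df = pd.DataFrame({
    "Sex": ["male", "male", "female", "female", "male"],
    "Survived": [0, 1, 1, 1, 0],
})

# The mean of a 0/1 column per group is exactly the survival ratio
ratio = df.groupby("Sex")["Survived"].mean()
```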
# Count the PassengerId grouped by the age and save it as dataframe
df=pd.DataFrame({'count' : titanic.groupby("Age")["PassengerId"].count()}).reset_index()
# Make a dictionary out of df, where age is the key and count is the value
passengers_by_age = dict(zip(df["Age"], df["count"]))
# Count the PassengerId that is grouped by survival and gernder and save it in a dataframe
df=pd.DataFrame({'count' : titanic.groupby( ["Survived","Age"])["PassengerId"].count()}).reset_index()
# New dataframe where all passengers survived
df2 = df.ix[(df['Survived'] == 1)]
# Make a dictionary where keys are the passenger age group and the values the normalized count of passengers in this age group
age_survived_norm={}
for index, row in df2.iterrows():
age_survived_norm.update(({row["Age"]:row["count"]/float(passengers_by_age[row["Age"]])}))
# Plot the results
plt.figure(figsize=(13,7))
plt.bar(list(age_survived_norm.keys()), list(age_survived_norm.values()), align='center', color="#377eb8")
plt.xlabel('Age / years')
plt.ylabel('Ratio of survived passengers')
plt.title('Survival of passengers by age')
plt.show()
Explanation: The left plot shows the tendency that a lower fare price decreases the ratio of survival. In the right plot a steady decline of the survival ratio with lower passenger class can be observed. In summary, the social-economic status did have an impact on the chances of survival: passengers with higher social-economic status had a higher chance to survive.
3.3 Did age, regardless of gender, determine your chances of survival?
End of explanation
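The manual fare-binning loops above can be expressed more compactly with pd.cut; a sketch on toy data (the bin edges are illustrative, not the 20 bins used above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Fare": [5.0, 12.0, 40.0, 80.0, 250.0],
    "Survived": [0, 0, 1, 1, 1],
})

# pd.cut assigns each fare to an interval; the per-interval mean of the
# 0/1 Survived column is the survival ratio per fare range
bins = [0, 15, 60, 300]
ratio = df.groupby(pd.cut(df["Fare"], bins=bins))["Survived"].mean()
```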
# Count the PassengerId grouped by age and gender and save as dataframe
df=pd.DataFrame({'count' : titanic.groupby(["Age", "Sex"])["PassengerId"].count()}).reset_index()
# Take from df values which belong to men and save as new dataframe
df_male=df.loc[(df['Sex']=='male')]
# Take from df values which belong to women and save as new dataframe
df_women=df.loc[(df['Sex']=='female')]
# Create dictionary with age of men as the key and the count of men in this age group as value
male_by_age=dict(zip(df_male["Age"], df_male["count"]))
# Create dictionary with age of women as the key and the count of women in this age group as value
female_by_age=dict(zip(df_women["Age"], df_women["count"]))
#Count the PassengerId grouped by survival, age and gender and save as dataframe
df=pd.DataFrame({'count' : titanic.groupby( ["Survived","Age", "Sex"])["PassengerId"].count()}).reset_index()
#Create two dictionaries in which the keys are the age of men/women and the normalized count of men/women in this age group as value
male_by_age_survived={}
female_by_age_survived={}
for index, row in df.iterrows():
if row["Survived"]==1:
if row["Sex"]=="male":
male_by_age_survived.update(({row["Age"]:row["count"]/float(male_by_age[row["Age"]])}))
else:
female_by_age_survived.update(({row["Age"]:row["count"]/float(female_by_age[row["Age"]])}))
# Plot the results
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 5))
axes[0].bar(list(male_by_age_survived.keys()), list(male_by_age_survived.values()), align='center')
axes[0].set_xlabel('Age / years')
axes[0].set_ylabel('Ratio of survived men')
axes[0].set_title('Survival of male passengers by age')
axes[1].bar(list(female_by_age_survived.keys()), list(female_by_age_survived.values()), align='center')
axes[1].set_xlabel('Age / years')
axes[1].set_ylabel('Ratio of survived women')
axes[1].set_title('Survival of female passengers by age')
plt.show()
Explanation: It can be observed that younger and older passengers, regardless of their gender, had a higher survival ratio than middle-aged passengers.
3.4 Did the age considering the gender determine the chances of survival?
End of explanation
# How many passengers do have how many children? Create new dataframe and transform it's columns to a dictionary
parch_count=pd.DataFrame({'count' : titanic.groupby("Parch")["PassengerId"].count()}).reset_index()
parch_count=dict(zip(parch_count["Parch"],parch_count["count"]))
# Same dataframe as above but also grouped by survival
df=pd.DataFrame({'count' : titanic.groupby(["Survived","Parch"])["PassengerId"].count()}).reset_index()
# Calculate the survival ratio per children aboard
survival_ratio={}
for index, row in df.iterrows():
    if not row["Survived"]:
        survival_ratio[row["Parch"]]=1-(row["count"]/float(parch_count[row["Parch"]]))
plt.figure(figsize=(10,6))
plt.bar(list(survival_ratio.keys()),list(survival_ratio.values()),color='#377eb8', edgecolor = "Black")
plt.title("Survival ratio per children aboard")
plt.xlabel("Number of children per passenger")
plt.ylabel("Ratio of survival")
plt.show()
Explanation: If we take gender into account, it can be seen that middle-aged male passengers had a much lower ratio of survival than women, although male children still had good chances of survival.
In contrast, female passengers had a higher survival ratio than male passengers throughout all age groups.
3.5 Did the number of children aboard per passenger determine the chances of survival?
End of explanation |
Description:
Using an SBML model
Getting started
Installing libraries
Before you start, you will need to install a couple of libraries
Step1: Sharing the data
If you set this variable to true, we will export some of the data, as either txt files or pickle files, and then you can import them into other notebooks to explore the data
Step2: Running an SBML model
If you have run your genome through RAST, you can download the SBML model and use that directly.
We have provided an SBML model of Citrobacter sedlakii that you can download and use. You can right-ctrl click on this link and save the SBML file in the same location you are running this iPython notebook.
We use this SBML model to demonstrate the key points of the FBA approach
Step3: Find all the reactions and identify those that are boundary reactions
We need a set of reactions to run in the model. In this case, we are going to run all the reactions in our SBML file. However, you can change this set if you want to knock out reactions, add reactions, or generally modify the model. We store those in the reactions_to_run set.
The boundary reactions refer to compounds that are secreted but then need to be removed from the reactions_to_run set. We usually include a consumption of those compounds that is open ended, as if they are draining away. We store those reactions in the uptake_secretion_reactions dictionary.
Step4: At this point, we can take a look at how many reactions are in the model, not counting the biomass reaction
Step5: Find all the compounds in the model, and filter out those that are secreted
We need to filter out uptake and secretion compounds from our list of all compounds before we can make a stoichiometric matrix.
Step6: Again, we can see how many compounds there are in the model.
Step7: And now we have the size of our stoichiometric matrix! Notice that the stoichiometric matrix is composed of the reactions that we are going to run and the compounds that are in those reactions (but not the uptake/secretion reactions and compounds).
Step8: Read the media file, and correct the media names
In our media directory, we have a lot of different media formulations, most of which we use with the Genotype-Phenotype project. For this example, we are going to use Lysogeny Broth (LB). There are many different formulations of LB, but we have included the recipe created by the folks at Argonne so that it is comparable with their analysis. You can download ArgonneLB.txt and put it in the same directory as this iPython notebook to run it.
Once we have read the file we need to correct the names in the compounds. Sometimes when compound names are exported to the SBML file they are modified slightly. This just corrects those names.
Step9: Set the reaction bounds for uptake/secretion compounds
The uptake and secretion compounds typically have reaction bounds that allow them to be consumed (i.e. diffuse away from the cell) but not produced. However, our media components can also increase in concentration (i.e. diffuse to the cell) and thus the bounds are set higher. Whenever you change the growth media, you also need to adjust the reaction bounds to ensure that the media can be consumed!
Step10: Run the FBA
Now that we have constructed our model, we can run the FBA!
Step11: Export the components of the model
This demonstrates how to export and import the components of this model, so you can do other things with it! | Python Code:
import sys
import os
import copy
import PyFBA
import pickle
Explanation: Using an SBML model
Getting started
Installing libraries
Before you start, you will need to install a couple of libraries:
The ModelSeedDatabase has all the biochemistry we'll need. You can install that with git clone.
The PyFBA library has detailed installation instructions. Don't be scared, its mostly just pip install.
(Optional) Also, get the SEED Servers as you can get a lot of information from them. You can install the git python repo from github. Make sure that the SEED_Servers_Python is in your PYTHONPATH.
We start with importing some modules that we are going to use.
We import sys so that we can use standard out and standard error if we have some error messages.<br>
We import copy so that we can make a deep copy of data structures for later comparisons.<br>
Then we import the PyFBA module to get started.
End of explanation
share_data = True
Explanation: Sharing the data
If you set this variable to true, we will export some of the data, as either txt files or pickle files, and then you can import them into other notebooks to explore the data
End of explanation
sbml = PyFBA.parse.parse_sbml_file("../example_data/Citrobacter/Citrobacter_sedlakii.sbml")
Explanation: Running an SBML model
If you have run your genome through RAST, you can download the SBML model and use that directly.
We have provided an SBML model of Citrobacter sedlakii that you can download and use. You can right-ctrl click on this link and save the SBML file in the same location you are running this iPython notebook.
We use this SBML model to demonstrate the key points of the FBA approach: defining the reactions, including the boundary, or drainflux, reactions; the compounds, including the drain compounds; the media; and the reaction bounds.
We'll take it step by step!
We start by parsing the model:
End of explanation
# Get a dict of reactions.
# The key is the reaction ID, and the value is a metabolism.reaction.Reaction object
reactions = sbml.reactions
reactions_to_run = set()
uptake_secretion_reactions = {}
biomass_equation = None
for r in reactions:
if 'biomass_equation' == r:
biomass_equation = reactions[r]
print(f"Our biomass equation is {biomass_equation.readable_name}")
continue
is_boundary = False
for c in reactions[r].all_compounds():
if c.uptake_secretion:
is_boundary = True
break
if is_boundary:
reactions[r].is_uptake_secretion = True
uptake_secretion_reactions[r] = reactions[r]
else:
reactions_to_run.add(r)
Explanation: Find all the reactions and identify those that are boundary reactions
We need a set of reactions to run in the model. In this case, we are going to run all the reactions in our SBML file. However, you can change this set if you want to knock out reactions, add reactions, or generally modify the model. We store those in the reactions_to_run set.
The boundary reactions refer to compounds that are secreted but then need to be removed from the reactions_to_run set. We usually include a consumption of those compounds that is open ended, as if they are draining away. We store those reactions in the uptake_secretion_reactions dictionary.
End of explanation
print(f"The biomass equation is {biomass_equation}")
print(f"There are {len(reactions)} reactions in the model")
print(f"There are {len(uptake_secretion_reactions)} uptake/secretion reactions in the model")
print(f"There are {len(reactions_to_run)} reactions to be run in the model")
if share_data:
with open('sbml_reactions.txt', 'w') as out:
for r in reactions:
out.write(f"{r}\n")
Explanation: At this point, we can take a look at how many reactions are in the model, not counting the biomass reaction:
End of explanation
all_compounds = sbml.compounds
# Filter for compounds that are boundary compounds
filtered_compounds = set()
for c in all_compounds:
if not c.uptake_secretion:
filtered_compounds.add(c)
Explanation: Find all the compounds in the model, and filter out those that are secreted
We need to filter out uptake and secretion compounds from our list of all compounds before we can make a stoichiometric matrix.
End of explanation
print(f"There are {len(all_compounds)} total compounds in the model")
print(f"There are {len(filtered_compounds)} compounds that are not involved in uptake and secretion")
Explanation: Again, we can see how many compounds there are in the model.
End of explanation
print(f"The stoichiometric matrix will be {len(reactions_to_run):,} reactions by {len(filtered_compounds):,} compounds")
Explanation: And now we have the size of our stoichiometric matrix! Notice that the stoichiometric matrix is composed of the reactions that we are going to run and the compounds that are in those reactions (but not the uptake/secretion reactions and compounds).
End of explanation
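For intuition, the stoichiometric matrix has one row per compound and one column per reaction, with entries equal to the signed stoichiometric coefficients. A minimal numpy sketch with a made-up three-compound, two-reaction network (the names and coefficients are purely illustrative, not taken from the Citrobacter model):

```python
import numpy as np

# Hypothetical network: R1: A -> B, R2: B -> 2 C
compounds = ["A", "B", "C"]
reactions = {"R1": {"A": -1, "B": 1}, "R2": {"B": -1, "C": 2}}

# Build S: rows are compounds, columns are reactions,
# entries are the signed stoichiometric coefficients
S = np.zeros((len(compounds), len(reactions)))
for j, (rid, stoich) in enumerate(reactions.items()):
    for cpd, coeff in stoich.items():
        S[compounds.index(cpd), j] = coeff

print(S)
# At steady state, FBA requires S @ v = 0 for the flux vector v
```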
# Read the media file
#media = PyFBA.parse.read_media_file("/home/redwards/.local/lib/python3.9/site-packages/PyFBA-2.1-py3.9.egg/PyFBA/Biochemistry/media/ArgonneLB.txt")
# mediafile = "MOPS_NoC_L-Methionine"
mediafile = 'ArgonneLB'
# mediafile = 'MOPS_NoC_D-Glucose'
media = PyFBA.parse.pyfba_media(mediafile)
# Correct the names
media = sbml.correct_media(media)
print(f"The media has {len(media)} compounds")
Explanation: Read the media file, and correct the media names
In our media directory, we have a lot of different media formulations, most of which we use with the Genotype-Phenotype project. For this example, we are going to use Lysogeny Broth (LB). There are many different formulations of LB, but we have included the recipe created by the folks at Argonne so that it is comparable with their analysis. You can download ArgonneLB.txt and put it in the same directory as this iPython notebook to run it.
Once we have read the file we need to correct the names in the compounds. Sometimes when compound names are exported to the SBML file they are modified slightly. This just corrects those names.
End of explanation
# Adjust the lower bounds of uptake secretion reactions
# for things that are not in the media
mcr = 0
for u in uptake_secretion_reactions:
# just reset the bounds in case we change media and re-run this block
reactions[u].lower_bound = -1000.0
uptake_secretion_reactions[u].lower_bound = -1000.0
reactions[u].upper_bound = 1000.0
uptake_secretion_reactions[u].upper_bound = 1000.0
is_media_component = False
override = False
for c in uptake_secretion_reactions[u].all_compounds():
if c in media:
is_media_component = True
if is_media_component:
mcr += 1
else:
reactions[u].lower_bound = 0.0
uptake_secretion_reactions[u].lower_bound = 0.0
# these are the reactions that allow the media components to flux
# print(f"{u} {sbml.reactions[u].equation} ({sbml.reactions[u].lower_bound}, {sbml.reactions[u].upper_bound})")
print(f"There are {mcr} reactions (out of {len(uptake_secretion_reactions)}) with a media component")
Explanation: Set the reaction bounds for uptake/secretion compounds
The uptake and secretion compounds typically have reaction bounds that allow them to be consumed (i.e. diffuse away from the cell) but not produced. However, our media components can also increase in concentration (i.e. diffuse to the cell) and thus the bounds are set higher. Whenever you change the growth media, you also need to adjust the reaction bounds to ensure that the media can be consumed!
End of explanation
ms = PyFBA.model_seed.ModelData(compounds = filtered_compounds, reactions = reactions)
status, value, growth = PyFBA.fba.run_fba(ms, reactions_to_run, media, biomass_equation,
uptake_secretion_reactions, verbose=True)
print("The FBA completed with a flux value of {} --> growth: {}".format(value, growth))
Explanation: Run the FBA
Now that we have constructed our model, we can run the FBA!
End of explanation
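Under the hood, an FBA run like this solves a linear program: maximize the biomass flux subject to the steady-state constraint S·v = 0 and the per-reaction bounds. A toy sketch with scipy (the two-reaction network below is invented for illustration and is not part of the real model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: an uptake reaction produces compound A, a "biomass" reaction consumes it.
# S has one row (compound A) and two columns (uptake, biomass).
S = np.array([[1.0, -1.0]])

# Maximize the biomass flux v[1]; linprog minimizes, so negate the objective
c = [0.0, -1.0]
bounds = [(0.0, 10.0),     # uptake limited to 10 (plays the role of the media)
          (0.0, 1000.0)]   # biomass flux essentially unbounded

res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds)
print(res.x)  # the biomass flux is limited by the uptake bound
```

This is why the bound adjustments in the previous cell matter: loosening or tightening the uptake bounds directly changes the achievable biomass flux.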
if share_data:
pickle.dump(filtered_compounds, open('compounds.pickle', 'wb'))
pickle.dump(reactions, open('reactions.pickle', 'wb'))
pickle.dump(reactions_to_run, open('reactions_to_run.pickle', 'wb'))
pickle.dump(media, open('media.pickle', 'wb'))
pickle.dump(biomass_equation, open('sbml_biomass.pickle', 'wb'))
pickle.dump(uptake_secretion_reactions, open('uptake_secretion_reactions.pickle', 'wb'))
if share_data:
sbml_filtered_compounds = pickle.load(open('compounds.pickle', 'rb'))
sbml_reactions = pickle.load(open('reactions.pickle', 'rb'))
sbml_reactions_to_run = pickle.load(open('reactions_to_run.pickle', 'rb'))
sbml_media = pickle.load(open('media.pickle', 'rb'))
sbml_biomass_equation = pickle.load(open('sbml_biomass.pickle', 'rb'))
sbml_uptake_secretion_reactions = pickle.load(open('uptake_secretion_reactions.pickle', 'rb'))
ms = PyFBA.model_seed.ModelData(compounds = sbml_filtered_compounds, reactions = sbml_reactions)
status, value, growth = PyFBA.fba.run_fba(ms, sbml_reactions_to_run, sbml_media, sbml_biomass_equation,
sbml_uptake_secretion_reactions, verbose=True)
print("The FBA completed with a flux value of {} --> growth: {}".format(value, growth))
Explanation: Export the components of the model
This demonstrates how to export and import the components of this model, so you can do other things with it!
End of explanation |
Description:
Self-Driving Car Engineer Nanodegree
Project
Step1: Read in an Image
Step9: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are
Step10: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Step11: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
Step12: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos
Step13: Let's try the one with the solid white lane on the right first ...
Step15: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step17: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
Step19: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
#reading in an image
#image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
#print('This image is:', type(image), 'with dimensions:', image.shape)
#plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
Explanation: Read in an Image
End of explanation
import math
def grayscale(img):
Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
Applies the Canny transform
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
Applies a Gaussian Noise kernel
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[0, 0, 255], thickness=10):
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
x_points = []
y_points = []
x_right = []
y_right = []
x_left = []
y_left = []
    for line in lines:
        for x1,y1,x2,y2 in line:
            x_points.append(x1)
            x_points.append(x2)
            y_points.append(y1)
            y_points.append(y2)
            if (x2 - x1) != 0:  # guard against vertical segments before computing the slope
                slope = ((y2 - y1) / (x2 - x1))
                if slope < -0.5 and slope > -0.8: #Left lane (negative slope, since y grows downwards)
                #if slope < -0.2 and slope > -0.8: #Switch comments with the line above to also see the challenge output! not optimal, but it works fairly!
                    x_left.extend((x1, x2))
                    y_left.extend((y1, y2))
                elif slope > 0.5 and slope < 0.8: #Right lane (positive slope)
                #elif slope > 0.2 and slope < 0.8: #Switch comments with the line above to also see the challenge output! Be advised: Video 1 & 2 output quality will drop!
                    x_right.extend((x1, x2))
                    y_right.extend((y1, y2))
#for the left ---------------------------------------------------------
fit_line_left = np.polyfit(x_left, y_left,1)
fit_function_left = np.poly1d(fit_line_left)
min_x_left = min(x_left)
max_x_left = max(x_left)
# y = mx + c to calculate x values for desired y values 540 (bottom of picture) and 320, under horizon
left_y1 = 540
left_x1 = int(round((left_y1 - fit_line_left[1]) / fit_line_left[0]))
left_y2 = 320
left_x2 = int(round((left_y2 - fit_line_left[1]) / fit_line_left[0]))
point_1_left = (left_x1), (left_y1)
point_2_left = (left_x2), (left_y2)
# now draw the left line
cv2.line(img, point_1_left, point_2_left, color, thickness)
#for the right ---------------------------------------------------------
fit_line_right = np.polyfit(x_right, y_right,1)
fit_function_right = np.poly1d(fit_line_right)
min_y_right = min(y_right)
max_y_right = max(y_right)
# y = mx + c to calculate x values for desired y values
right_y1 = 540
right_x1 = int(round((right_y1 - fit_line_right[1]) / fit_line_right[0]))
right_y2 = 320
right_x2 = int(round((right_y2 - fit_line_right[1]) / fit_line_right[0]))
point_1_right = (right_x1), (right_y1)
point_2_right = (right_x2), (right_y2)
# now draw the right line
cv2.line(img, point_1_right, point_2_right, color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
return cv2.addWeighted(initial_img, α, img, β, λ)
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
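For intuition on two of the helpers listed above: cv2.inRange builds a binary mask of the pixels whose channels all fall within [lower, upper], and cv2.bitwise_and with that mask keeps only those pixels. A minimal numpy-only sketch of that logic on a made-up 2x2 RGB image (illustrative only, not part of the project pipeline):

```python
import numpy as np

# Tiny synthetic image: two near-white pixels, two darker ones
img = np.array([[[250, 250, 250], [100, 100, 100]],
                [[255, 255, 200], [0, 0, 0]]], dtype=np.uint8)

lower, upper = np.array([200, 200, 180]), np.array([255, 255, 255])

# Equivalent of cv2.inRange: 255 where every channel is inside the range
mask = np.where(((img >= lower) & (img <= upper)).all(axis=-1), 255, 0).astype(np.uint8)

# Equivalent of cv2.bitwise_and(img, img, mask=mask): zero out unmasked pixels
selected = img * (mask[..., None] // 255)
print(mask)
```

A mask like this is one way to isolate white or yellow lane markings before edge detection.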
import os
os.listdir("test_images/")
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
#Pipeline for images
#reading in an image
#image = mpimg.imread('test_images/solidWhiteCurve.jpg')
#image = mpimg.imread('test_images/solidWhiteRight.jpg')
image = mpimg.imread('test_images/solidYellowCurve.jpg')
#image = mpimg.imread('test_images/solidYellowCurve2.jpg')
#image = mpimg.imread('test_images/solidYellowLeft.jpg')
#image = mpimg.imread('test_images/whiteCarLaneSwitch.jpg')
#image = mpimg.imread('test_images/challenge.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
plt.show()
#make it gray and do gaussian and canny
gray = grayscale(image)
blur_gray = gaussian_blur(gray, 3)
edges = canny(blur_gray, 100, 200)
plt.imshow(edges, cmap='gray')
plt.show()
#make a mask and do hough transform
imshape = image.shape
vertices = np.array([[(0, imshape[0]),(430, 320), (500, 320), (imshape[1], imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
plt.imshow(masked_edges, cmap='gray')
plt.show()
hough_image = hough_lines(masked_edges, 3, np.pi/180, 30, 10, 5)
plt.imshow(hough_image, cmap='gray')
plt.show()
weighted_image = weighted_img(hough_image, image, α=0.8, β=1., λ=0.)
#show and save
plt.imshow(weighted_image, cmap='gray')
plt.show()
cv2.imwrite('test_images/result1.jpg', gray)
cv2.imwrite('test_images/result2.jpg', blur_gray)
cv2.imwrite('test_images/result3.jpg', edges)
cv2.imwrite('test_images/result4.jpg', masked_edges)
cv2.imwrite('test_images/result5.jpg', hough_image)
cv2.imwrite('test_images/result6.jpg', weighted_image)
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def pipeline(img):
# NOTE: The output you return should be a color image (3 channel) for processing video below
gray = grayscale(img)
blur_gray = gaussian_blur(gray, 3)
edges = canny(blur_gray, 100, 150)
imshape = img.shape
vertices = np.array([[(0, imshape[0]),(430, 320), (500, 320), (imshape[1], imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
hough_image = hough_lines(masked_edges, 3, np.pi/180, 30, 10, 5)
img = weighted_img(hough_image, img, α=0.8, β=1., λ=0.)
return img
def process_image(image):
result = pipeline(image)
return result
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(yellow_output))
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
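The averaging/extrapolation idea described above can be sketched as a standalone helper. This is an illustrative implementation, not the project's actual `draw_lines`; the function name, the slope threshold, and the x-as-a-function-of-y fit are all assumptions:

```python
import numpy as np

def average_lane_lines(lines, y_bottom, y_top, slope_threshold=0.2):
    """Average Hough segments into one left and one right lane line.

    `lines` is an iterable of (x1, y1, x2, y2) segments. Returns a dict
    mapping 'left'/'right' to ((x_bottom, y_bottom), (x_top, y_top)),
    omitting a side when no segments qualify.
    """
    sides = {'left': ([], []), 'right': ([], [])}
    for x1, y1, x2, y2 in lines:
        if x2 == x1:
            continue  # vertical segment; slope undefined
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < slope_threshold:
            continue  # near-horizontal noise
        side = 'left' if slope < 0 else 'right'
        sides[side][0].extend([x1, x2])
        sides[side][1].extend([y1, y2])

    result = {}
    for side, (xs, ys) in sides.items():
        if len(xs) < 2:
            continue  # avoid calling polyfit on an empty/degenerate set
        # Fit x as a function of y so we can evaluate at fixed y values.
        fit = np.polyfit(ys, xs, 1)
        result[side] = (
            (int(np.polyval(fit, y_bottom)), y_bottom),
            (int(np.polyval(fit, y_top)), y_top),
        )
    return result
```

The empty-set guard also addresses the empty-array/polyfit failure noted in the challenge comments below.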
challenge_output = 'test_videos_output/challenge.mp4'
clip3 = VideoFileClip('test_videos/challenge.mp4')
clip_resized = clip3.resize(height=540) # Resize clip to fit current pipeline
clip_resized.write_videofile("test_videos/challenge_resized.mp4", audio=False) #write the resized file to a new file
clip4 = VideoFileClip('test_videos/challenge_resized.mp4') #now read this file and do the math
challenge_clip = clip4.fl_image(process_image)
#It works if, in the draw_lines function, the slope threshold is lowered from 0.5 to 0.2 and from -0.5 to -0.2.
#If the max line length for the Hough transform is also lowered to 15, the result is already quite good.
#Due to time constraints I didn't work it out, but this could be smoothed with a frame-by-frame comparison.
#If the threshold on the lowest slope is too high, the array turns up empty and the polyfit function won't work.
#So if, in case of an empty array, the previous one were picked for that frame until a valid array value was found in
#a next frame, it would be much more stable.
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(challenge_output))
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation |
8,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visual Model Selection with Yellowbrick
In this tutorial, we are going to look at scores for a variety of Scikit-Learn models and compare them using visual diagnostic tools from Yellowbrick in order to select the best model for our data.
About Yellowbrick
Yellowbrick is a new Python library that extends the Scikit-Learn API to incorporate visualizations into the machine learning workflow.
The Yellowbrick library is a diagnostic visualization platform for machine learning that allows data scientists to steer the model selection process. Yellowbrick extends the Scikit-Learn API with a new core object
Step4: Feature Extraction
Our data, including the target, is categorical. We will need to change these values to numeric ones for machine learning. In order to extract this from the dataset, we'll have to use Scikit-Learn transformers to transform our input dataset into something that can be fit to a model. Luckily, Scikit-Learn does provide a transformer for converting categorical labels into numeric integers
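For a single column, the stock encoder behaves like this — a toy illustration of the behavior that the multi-column adapter described here generalizes:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
codes = le.fit_transform(['edible', 'poisonous', 'edible'])

print(list(codes))        # [0, 1, 0] -- classes get codes in sorted order
print(list(le.classes_))  # ['edible', 'poisonous']
```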
Step6: Modeling and Evaluation
Common metrics for evaluating classifiers
Precision is the number of correct positive results divided by the number of all positive results (e.g. How many of the mushrooms we predicted would be edible actually were?).
Recall is the number of correct positive results divided by the number of positive results that should have been returned (e.g. How many of the mushrooms that were poisonous did we accurately predict were poisonous?).
The F1 score is a measure of a test's accuracy. It considers both the precision and the recall of the test to compute the score. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst at 0.
precision = true positives / (true positives + false positives)
recall = true positives / (false negatives + true positives)
F1 score = 2 * ((precision * recall) / (precision + recall))
Now we're ready to make some predictions!
Let's build a way to evaluate multiple estimators -- first using traditional numeric scores (which we'll later compare to some visual diagnostics from the Yellowbrick library).
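As a quick numeric check of the formulas above (toy confusion-matrix counts, not the mushroom data):

```python
def f1_from_counts(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * (precision * recall) / (precision + recall)
    return precision, recall, f1

# Toy example: 8 true positives, 2 false positives, 4 false negatives.
p, r, f1 = f1_from_counts(tp=8, fp=2, fn=4)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.8 0.667 0.727
```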
Step8: Preliminary Model Evaluation
Based on the results from the F1 scores above, which model is performing the best?
Visual Model Evaluation
Now let's refactor our model evaluation function to use Yellowbrick's ClassificationReport class, a model visualizer that displays the precision, recall, and F1 scores. This visual model analysis tool integrates numerical scores as well as a color-coded heatmap in order to support easy interpretation and detection, particularly of the nuances of Type I and Type II error, which are very relevant (lifesaving, even) to our use case!
Type I error (or a "false positive") is detecting an effect that is not present (e.g. determining a mushroom is poisonous when it is in fact edible).
Type II error (or a "false negative") is failing to detect an effect that is present (e.g. believing a mushroom is edible when it is in fact poisonous). | Python Code:
import os
import pandas as pd
names = [
'class',
'cap-shape',
'cap-surface',
'cap-color'
]
mushrooms = os.path.join('data','agaricus-lepiota.txt')
dataset = pd.read_csv(mushrooms)
dataset.columns = names
dataset.head()
features = ['cap-shape', 'cap-surface', 'cap-color']
target = ['class']
X = dataset[features]
y = dataset[target]
Explanation: Visual Model Selection with Yellowbrick
In this tutorial, we are going to look at scores for a variety of Scikit-Learn models and compare them using visual diagnostic tools from Yellowbrick in order to select the best model for our data.
About Yellowbrick
Yellowbrick is a new Python library that extends the Scikit-Learn API to incorporate visualizations into the machine learning workflow.
The Yellowbrick library is a diagnostic visualization platform for machine learning that allows data scientists to steer the model selection process. Yellowbrick extends the Scikit-Learn API with a new core object: the Visualizer. Visualizers allow visual models to be fit and transformed as part of the Scikit-Learn Pipeline process, providing visual diagnostics throughout the transformation of high dimensional data.
To learn more about Yellowbrick, visit http://www.scikit-yb.org.
About the Data
This tutorial uses a version of the mushroom data set from the UCI Machine Learning Repository. Our objective is to predict if a mushroom is poisonous or edible based on its characteristics.
The data include descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota Family. Each species was identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended (this latter class was combined with the poisonous one).
Our file, "agaricus-lepiota.txt," contains information for 3 nominally valued attributes and a target value from 8124 instances of mushrooms (4208 edible, 3916 poisonous).
Let's load the data with Pandas.
End of explanation
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
class EncodeCategorical(BaseEstimator, TransformerMixin):
    """Encodes a specified list of columns, or all columns if None."""

    def __init__(self, columns=None):
        # Keep None as a sentinel so fit() can fall back to all columns.
        self.columns = list(columns) if columns is not None else None
        self.encoders = None

    def fit(self, data, target=None):
        """Expects a data frame with named columns to encode."""
        # Encode all columns if columns is None
        if self.columns is None:
            self.columns = data.columns
        # Fit a label encoder for each column in the data frame
        self.encoders = {
            column: LabelEncoder().fit(data[column])
            for column in self.columns
        }
        return self

    def transform(self, data):
        """Uses the encoders to transform a data frame."""
        output = data.copy()
        for column, encoder in self.encoders.items():
            output[column] = encoder.transform(data[column])
        return output
Explanation: Feature Extraction
Our data, including the target, is categorical. We will need to change these values to numeric ones for machine learning. In order to extract this from the dataset, we'll have to use Scikit-Learn transformers to transform our input dataset into something that can be fit to a model. Luckily, Sckit-Learn does provide a transformer for converting categorical labels into numeric integers: sklearn.preprocessing.LabelEncoder. Unfortunately it can only transform a single vector at a time, so we'll have to adapt it in order to apply it to multiple columns.
End of explanation
from sklearn.metrics import f1_score
from sklearn.pipeline import Pipeline
def model_selection(X, y, estimator):
"""Test various estimators."""
y = LabelEncoder().fit_transform(y.values.ravel())
model = Pipeline([
('label_encoding', EncodeCategorical(X.keys())),
('one_hot_encoder', OneHotEncoder()),
('estimator', estimator)
])
# Instantiate the classification model and visualizer
model.fit(X, y)
expected = y
predicted = model.predict(X)
# Compute and return the F1 score (the harmonic mean of precision and recall)
return (f1_score(expected, predicted))
# Try them all!
from sklearn.svm import LinearSVC, NuSVC, SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression, SGDClassifier
from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, RandomForestClassifier
model_selection(X, y, LinearSVC())
model_selection(X, y, NuSVC())
model_selection(X, y, SVC())
model_selection(X, y, SGDClassifier())
model_selection(X, y, KNeighborsClassifier())
model_selection(X, y, LogisticRegressionCV())
model_selection(X, y, LogisticRegression())
model_selection(X, y, BaggingClassifier())
model_selection(X, y, ExtraTreesClassifier())
model_selection(X, y, RandomForestClassifier())
Explanation: Modeling and Evaluation
Common metrics for evaluating classifiers
Precision is the number of correct positive results divided by the number of all positive results (e.g. How many of the mushrooms we predicted would be edible actually were?).
Recall is the number of correct positive results divided by the number of positive results that should have been returned (e.g. How many of the mushrooms that were poisonous did we accurately predict were poisonous?).
The F1 score is a measure of a test's accuracy. It considers both the precision and the recall of the test to compute the score. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst at 0.
precision = true positives / (true positives + false positives)
recall = true positives / (false negatives + true positives)
F1 score = 2 * ((precision * recall) / (precision + recall))
Now we're ready to make some predictions!
Let's build a way to evaluate multiple estimators -- first using traditional numeric scores (which we'll later compare to some visual diagnostics from the Yellowbrick library).
End of explanation
from sklearn.pipeline import Pipeline
from yellowbrick.classifier import ClassificationReport
def visual_model_selection(X, y, estimator):
"""Test various estimators."""
y = LabelEncoder().fit_transform(y.values.ravel())
model = Pipeline([
('label_encoding', EncodeCategorical(X.keys())),
('one_hot_encoder', OneHotEncoder()),
('estimator', estimator)
])
# Instantiate the classification model and visualizer
visualizer = ClassificationReport(model, classes=['edible', 'poisonous'])
visualizer.fit(X, y)
visualizer.score(X, y)
visualizer.poof()
visual_model_selection(X, y, LinearSVC())
visual_model_selection(X, y, NuSVC())
visual_model_selection(X, y, SVC())
visual_model_selection(X, y, SGDClassifier())
visual_model_selection(X, y, KNeighborsClassifier())
visual_model_selection(X, y, LogisticRegressionCV())
visual_model_selection(X, y, LogisticRegression())
visual_model_selection(X, y, BaggingClassifier())
visual_model_selection(X, y, ExtraTreesClassifier())
visual_model_selection(X, y, RandomForestClassifier())
Explanation: Preliminary Model Evaluation
Based on the results from the F1 scores above, which model is performing the best?
Visual Model Evaluation
Now let's refactor our model evaluation function to use Yellowbrick's ClassificationReport class, a model visualizer that displays the precision, recall, and F1 scores. This visual model analysis tool integrates numerical scores as well color-coded heatmap in order to support easy interpretation and detection, particularly the nuances of Type I and Type II error, which are very relevant (lifesaving, even) to our use case!
Type I error (or a "false positive") is detecting an effect that is not present (e.g. determining a mushroom is poisonous when it is in fact edible).
Type II error (or a "false negative") is failing to detect an effect that is present (e.g. believing a mushroom is edible when it is in fact poisonous).
End of explanation |
8,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolation Exercise 2
Step1: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain
Step2: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step3: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
Explanation: Interpolation Exercise 2
End of explanation
# YOUR CODE HERE
five_1=np.ones(11)*-5
four_1=np.ones(2)*-4
three_1=np.ones(2)*-3
two_1=np.ones(2)*-2
one_1=np.ones(2)*-1
zero=np.ones(3)*0
five=np.ones(11)*5
four=np.ones(2)*4
three=np.ones(2)*3
two=np.ones(2)*2
one=np.ones(2)*1
y=np.linspace(-5,5,11)
norm=np.array((-5,5))
mid=np.array((-5,0,5))
x=np.hstack((five_1,four_1,three_1,two_1,one_1,zero,one,two,three,four,five))
y=np.hstack((y,norm,norm,norm,norm,mid,norm,norm,norm,norm,y))
def func(x, y):
    t = np.zeros(len(x))
    t[len(x) // 2] = 1  # integer index; marks the single interior point f(0, 0) = 1
    return t
f=func(x,y)
f
#The following plot should show the points on the boundary and the single point in the interior:
fig=plt.figure()
plt.scatter(x, y);
plt.grid()
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
Explanation: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:
The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$.
The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.
The value of $f$ is known at a single interior point: $f(0,0)=1.0$.
The function $f$ is not known at any other points.
Create arrays x, y, f:
x should be a 1d array of the x coordinates on the boundary and the 1 interior point.
y should be a 1d array of the y coordinates on the boundary and the 1 interior point.
f should be a 1d array of the values of f at the corresponding x and y coordinates.
You might find that np.hstack is helpful.
End of explanation
# YOUR CODE HERE
from scipy.interpolate import interp2d
xnew=np.linspace(-5,5,100)
ynew=np.linspace(-5,5,100)
Xnew, Ynew = np.meshgrid(xnew,ynew)
Fnew=griddata((x,y),f,(Xnew,Ynew),method='cubic')
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
Explanation: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:
xnew and ynew should be 1d arrays with 100 points between $[-5,5]$.
Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.
Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).
Use cubic spline interpolation.
End of explanation
# YOUR CODE HERE
plt.figure(figsize=(6,6))
cont=plt.contour(Xnew,Ynew,Fnew, colors=('k','k'))
plt.title("Contour Map of F(x)")
plt.ylabel("Y-Axis")
plt.xlabel('X-Axis')
# plt.colorbar()
plt.clabel(cont, inline=1, fontsize=10)
plt.xlim(-5.5,5.5);
plt.ylim(-5.5,5.5);
# plt.grid()
assert True # leave this to grade the plot
Explanation: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
End of explanation |
8,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
automaton.star(algo = "auto")
Build an automaton that recognizes the Kleene star of the input automaton.
The algorithm has to be one of these
Step1: This is what the general algorithm for star outputs, given an automaton A and s being a new state.
The transition from s to A represents each initial transition of A.
The transition from A to s represents each final transition of A.
Step2: Examples
Standard
Step3: General | Python Code:
import vcsn
Explanation: automaton.star(algo = "auto")
Build an automaton that recognizes the Kleene star of the input automaton.
The algorithm has to be one of these:
"general": general star, no additional preconditions.
"standard": standard star.
"auto": default parameter, same as "standard" if parameters fit the standard preconditions, "general" otherwise.
Preconditions:
- "standard": automaton has to be standard.
Postconditions:
- "standard": the result automaton is standard.
- "general": the context of the result automaton is nullable.
See also:
- automaton.multiply
End of explanation
%%automaton a
context = "lan_char, b"
$ -> s
s -> A \e
A -> s \e
s -> $
ctx = vcsn.context('lal_char, q')
aut = lambda e: ctx.expression(e).standard()
Explanation: This is what the general algorithm for star outputs, given an automaton A and s being a new state.
The transition from s to A represents each initial transition of A.
The transition from A to s represents each final transition of A.
End of explanation
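The construction described above can be written out on a plain-dict NFA representation. This is an illustrative sketch, independent of vcsn's internals; the dictionary representation and the `'eps'` marker for spontaneous transitions are assumptions made for the example:

```python
def general_star(nfa):
    """Kleene star of an NFA given as a dict:
    {'states': set, 'initial': set, 'final': set,
     'delta': {(state, symbol): set_of_states}}, with 'eps' marking
    spontaneous transitions. Adds a fresh state s that is both initial
    and final, wired to A's initial states and from A's final states.
    """
    s = object()  # a fresh state, guaranteed distinct from existing ones
    delta = {k: set(v) for k, v in nfa['delta'].items()}
    for q in nfa['initial']:
        delta.setdefault((s, 'eps'), set()).add(q)  # s -> A
    for q in nfa['final']:
        delta.setdefault((q, 'eps'), set()).add(s)  # A -> s
    return {
        'states': nfa['states'] | {s},
        'initial': {s},  # s accepts the empty word...
        'final': {s},    # ...and restarts A after each accepted factor
        'delta': delta,
    }
```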
aut('a+b').star("standard")
Explanation: Examples
Standard
End of explanation
aut('a+b').star("general")
Explanation: General
End of explanation |
8,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Senior Income and Home Value Distributions For San Diego County
This package extracts the home value and household income for households in San Diego county with one or more household members aged 65 or older. The base data is from the 2015 5 year PUMS sample, from IPUMS<sup>1</sup>. The primary dataset variables used are
Step1: Source Data
The PUMS data is a sample, so both household and person records have weights. We use those weights to replicate records. We are not adjusting the values for CPI, since we don't have a CPI for 2015, and because the medians for income come out pretty close to those from the 2015 5Y ACS.
The HHINCOME and VALUEH variables have the typical distributions for income and home values: both are strongly right-skewed, with long upper tails.
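Repeating rows by weight, as described here, can also be done vectorized in pandas; a sketch on a toy frame (the column values are made up):

```python
import pandas as pd

df = pd.DataFrame({'HHINCOME': [50_000, 80_000], 'HHWT': [3, 2]})

# Repeat each record HHWT times, mimicking expansion of a weighted sample.
expanded = df.loc[df.index.repeat(df['HHWT'])].reset_index(drop=True)

print(len(expanded))  # 5: the first record 3 times, the second twice
```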
Step2: Procedure
After extracting the data for HHINCOME and VALUEH, we rank both values and then quantize the rankings into 10 groups, 0 through 9, hhincome_group and valueh_group. The HHINCOME variable correlates with VALUEH at .36, and the quantized rankings hhincome_group and valueh_group correlate at .38.
Initial attempts were made to fit curves to the income and home value distributions, but it is very difficult to find well-defined models that fit real income distributions. Bordley (bordley) analyzes the fit for 15 different distributions, reporting success with variations of the generalized beta distribution, gamma and Weibull. Majumder (majumder) proposes a four parameter model with variations for special cases. None of these models were considered well established enough to fit within the time constraints for the project, so this analysis will use empirical distributions that can be scaled to fit alternate parameters.
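The rank-then-quantize step looks like this on a toy series (synthetic values, not the PUMS extract):

```python
import numpy as np
import pandas as pd

values = pd.Series(np.arange(100, 0, -1), dtype=float)  # 100 synthetic values

ranks = values.rank()                      # 1.0 (smallest) .. 100.0 (largest)
groups = pd.qcut(ranks, 10, labels=False)  # decile labels 0..9

# Each of the 10 groups holds exactly one tenth of the records.
print(groups.min(), groups.max())  # 0 9
```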
Step3: Then, we group the dataset by valueh_group and collect all of the income values for each group. These groups have different distributions, with the lower-numbered groups skewing to the left and the higher-numbered groups skewing to the right.
To use these groups in a simulation, the user would select a group for a subject's home value, then randomly select an income in that group. When this is done many times, the original VALUEH correlates to the new distribution ( here, as t_income ) at .33, reasonably similar to the original correlations.
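A sketch of this select-group-then-draw procedure on synthetic data (the gamma incomes and uniform group assignments are placeholders, not the PUMS values):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
toy = pd.DataFrame({
    'valueh_group': rng.integers(0, 10, size=1000),
    'HHINCOME': rng.gamma(2.0, 25_000, size=1000),
})

# Pool incomes by home-value decile, then draw from the matching pool.
incomes_by_group = toy.groupby('valueh_group')['HHINCOME'].apply(list)

def draw_income(group):
    """Draw one income at random from the empirical pool for `group`."""
    pool = incomes_by_group[group]
    return pool[rng.integers(len(pool))]

print(draw_income(3) in incomes_by_group[3])  # True
```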
Step4: A scatter matrix shows similar structure for VALUEH and t_income.
Step5: The simulated incomes also have similar statistics to the original incomes. However, the median income is high. In San Diego county, the median household income for householders 65 and older in the 2015 5 year ACS is about \$51K, versus \$56K here. For home values, the mean home value for homeowners 65 and older is \$468K in the 5 year ACS, vs \$510K here.
Step6: Bibliography
Step7: Create a new KDE distribution based on the home values, including only values between $130,000 and $1.5M (the range over which the KDE is supported).
Step8: Overlay the prior plot with the histogram of the original values. We're using np.histogram to make the histogram, so it appears as a line chart.
Step9: Show a home value curve, interpolated to the same values as the distribution. The two curves should be coincident.
Step10: Now, interpolate to the values for the county, which shifts the curve right.
Step11: Here is an example of creating an interpolated distribution, then generating a synthetic distribution from it. | Python Code:
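Generating a synthetic distribution from a fitted density amounts to inverse-CDF sampling. A generic sketch with scipy's `gaussian_kde` follows — this is not the package's own `syn_dist`, and the gamma inputs are placeholder "home values":

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
data = rng.gamma(2.0, 100_000, size=2000)  # placeholder "home values"

# Fit the KDE and evaluate it on a regular support grid.
kde = gaussian_kde(data)
support = np.linspace(data.min(), data.max(), 512)
density = kde(support)

# Turn the density into a CDF, then invert it by interpolation to sample.
cdf = np.cumsum(density)
cdf = cdf / cdf[-1]
draws = np.interp(rng.random(5000), cdf, support)
```

The medians of `draws` and `data` should land close together, which is the property the analysis relies on.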
%matplotlib inline
%load_ext metatab
%load_ext autoreload
%autoreload 2
%mt_lib_dir lib
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import metatab as mt
import seaborn as sns; sns.set(color_codes=True)
import sqlite3
from IPython.display import display_html, HTML, display
import statsmodels as sm
from statsmodels.nonparametric.kde import KDEUnivariate
from scipy import integrate, stats
from incomedist import *
from multikde import MultiKde
plt.rcParams['figure.figsize']=(6,6)
%mt_open_package
!pwd
Explanation: Senior Income and Home Value Distributions For San Diego County
This package extracts the home value and household income for households in San Diego county with one or more household members aged 65 or older. The base data is from the 2015 5 year PUMS sample, from IPUMS<sup>1</sup>. The primary dataset variables used are: HHINCOME and VALUEH.
This extract is intended for analysis of senior issues in San Diego County, so the records used are further restricted with these filters:
WHERE AGE >= 65
HHINCOME < 9999999
VALUEH < 9999999
STATEFIP = 6 ( California )
COUNTYFIPS = 73 ( San Diego County )
The limits on the HHINCOME and VALUEH variables eliminate top coding.
This analysis used the IPUMS (ipums) data
End of explanation
# Check the weights for the whole file to see if they sum to the number
# of households and people in the county. They don't, but the sum of the weights for households is close,
# 126,279,060 vs about 116M households
con = sqlite3.connect("ipums.sqlite")
wt = pd.read_sql_query("SELECT YEAR, DATANUM, SERIAL, HHWT, PERNUM, PERWT FROM ipums "
"WHERE PERNUM = 1 AND YEAR = 2015", con)
wt.drop(0, inplace=True)
nd_s = wt.drop_duplicates(['YEAR', 'DATANUM','SERIAL'])
country_hhwt_sum = nd_s[nd_s.PERNUM == 1]['HHWT'].sum()
len(wt), len(nd_s), country_hhwt_sum
import sqlite3
# PERNUM = 1 ensures only record for each household
con = sqlite3.connect("ipums.sqlite")
senior_hh = pd.read_sql_query(
"SELECT DISTINCT SERIAL, HHWT, PERWT, HHINCOME, VALUEH "
"FROM ipums "
"WHERE "
# "AGE >= 65 AND "
"HHINCOME < 9999999 AND VALUEH < 9999999 AND "
"STATEFIP = 6 AND COUNTYFIPS=73 ", con)
# Since we're doing a probabilistic simulation, the easiest way to deal with the weight is just to repeat rows.
# However, adding the weights doesn't change the statistics much, so they are turned off now, for speed.
def generate_data():
for index, row in senior_hh.drop_duplicates('SERIAL').iterrows():
#for i in range(row.HHWT):
yield (row.HHINCOME, row.VALUEH)
incv = pd.DataFrame(list(generate_data()), columns=['HHINCOME', 'VALUEH'])
sns.jointplot(x="HHINCOME", y="VALUEH", marker='.', scatter_kws={'alpha': 0.1}, data=incv, kind='reg');
from matplotlib.ticker import FuncFormatter
fig = plt.figure(figsize = (20,12))
ax = fig.add_subplot(111)
fig.suptitle("Distribution Plot of Home Values in San Diego County\n"
"( Truncated at $2.2M )", fontsize=18)
sns.distplot(incv.VALUEH[incv.VALUEH <2200000], ax=ax);
ax.set_xlabel('Home Value ($)', fontsize=14)
ax.set_ylabel('Density', fontsize=14);
ax.get_xaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ',')))
from matplotlib.ticker import FuncFormatter
fig = plt.figure(figsize = (20,12))
ax = fig.add_subplot(111)
fig.suptitle("Distribution Plot of Home Values in San Diego County\n"
"( Truncated at $2.2M )", fontsize=18)
sns.kdeplot(incv.VALUEH[incv.VALUEH <2200000], ax=ax);
sns.kdeplot(incv.VALUEH[incv.VALUEH <1900000]+300000, ax=ax);
ax.set_xlabel('Home Value ($)', fontsize=14)
ax.set_ylabel('Density', fontsize=14);
ax.get_xaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ',')))
Explanation: Source Data
The PUMS data is a sample, so both household and person records have weights. We use those weights to replicate records. We are not adjusting the values for CPI, since we don't have a CPI for 2015, and because the medians for income come out pretty close to those from the 2015 5Y ACS.
The HHINCOME and VALUEH variables have the typical distributions for income and home values: both are strongly right-skewed, with long upper tails.
End of explanation
incv['valueh_rank'] = incv.rank()['VALUEH']
incv['valueh_group'] = pd.qcut(incv.valueh_rank, 10, labels=False )
incv['hhincome_rank'] = incv.rank()['HHINCOME']
incv['hhincome_group'] = pd.qcut(incv.hhincome_rank, 10, labels=False )
incv[['HHINCOME', 'VALUEH', 'hhincome_group', 'valueh_group']] .corr()
from metatab.pands import MetatabDataFrame
odf = MetatabDataFrame(incv)
odf.name = 'income_homeval'
odf.title = 'Income and Home Value Records for San Diego County'
odf.HHINCOME.description = 'Household income'
odf.VALUEH.description = 'Home value'
odf.valueh_rank.description = 'Rank of the VALUEH value'
odf.valueh_group.description = 'The valueh_rank value quantized into 10 bins, from 0 to 9'
odf.hhincome_rank.description = 'Rank of the HHINCOME value'
odf.hhincome_group.description = 'The hhincome_rank value quantized into 10 bins, from 0 to 9'
%mt_add_dataframe odf --materialize
Explanation: Procedure
After extracting the data for HHINCOME and VALUEH, we rank both values and then quantize the rankings into 10 groups, 0 through 9, hhincome_group and valueh_group. The HHINCOME variable correlates with VALUEH at .36, and the quantized rankings hhincome_group and valueh_group correlate at .38.
Initial attempts were made to fit curves to the income and home value distributions, but it is very difficult to find well-defined models that fit real income distributions. Bordley (bordley) analyzes the fit for 15 different distributions, reporting success with variations of the generalized beta distribution, gamma and Weibull. Majumder (majumder) proposes a four parameter model with variations for special cases. None of these models were considered well established enough to fit within the time constraints for the project, so this analysis will use empirical distributions that can be scaled to fit alternate parameters.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
mk = MultiKde(odf, 'valueh_group', 'HHINCOME')
fig,AX = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(15,15))
incomes = [30000,
40000,
50000,
60000,
70000,
80000,
90000,
100000,
110000]
for mi, ax in zip(incomes, AX.flatten()):
s, d, icdf, g = mk.make_kde(mi)
syn_d = mk.syn_dist(mi, 10000)
syn_d.plot.hist(ax=ax, bins=40, title='Median Income ${:0,.0f}'.format(mi), normed=True, label='Generated')
ax.plot(s,d, lw=2, label='KDE')
fig.suptitle('Income Distributions By Median Income\nKDE and Generated Distribution')
plt.legend(loc='upper left')
plt.show()
import matplotlib.pyplot as plt
import numpy as np
mk = MultiKde(odf, 'valueh_group', 'HHINCOME')
fig = plt.figure(figsize = (20,12))
ax = fig.add_subplot(111)
incomes = [30000,
40000,
50000,
60000,
70000,
80000,
90000,
100000,
110000]
for mi in incomes:
s, d, icdf, g = mk.make_kde(mi)
syn_d = mk.syn_dist(mi, 10000)
#syn_d.plot.hist(ax=ax, bins=40, normed=True, label='Generated')
ax.plot(s,d, lw=2, label=str(mi))
fig.suptitle('Income Distributions By Median Income\nKDE and Generated Distribution\n< $250,000')
plt.legend(loc='upper left')
ax.set_xlim([0,250000])
plt.show()
df_kde = incv[ (incv.HHINCOME <200000) & (incv.VALUEH < 1000000) ]
ax = sns.kdeplot(df_kde.HHINCOME, df_kde.VALUEH, cbar=True)
Explanation: Then, we group the dataset by valueh_group and collect all of the income values for each group. These groups have different distributions, with the lower-numbered groups skewing to the left and the higher-numbered groups skewing to the right.
To use these groups in a simulation, the user would select a group for a subject's home value, then randomly select an income in that group. When this is done many times, the original VALUEH correlates to the new distribution ( here, as t_income ) at .33, reasonably similar to the original correlations.
End of explanation
t = incv.copy()
t['t_income'] = mk.syn_dist(t.HHINCOME.median(), len(t))
t[['HHINCOME','VALUEH','t_income']].corr()
sns.pairplot(t[['VALUEH','HHINCOME','t_income']]);
Explanation: A scatter matrix shows similar structure for VALUEH and t_income.
End of explanation
display(HTML("<h3>Descriptive Stats</h3>"))
t[['VALUEH','HHINCOME','t_income']].describe()
display(HTML("<h3>Correlations</h3>"))
t[['VALUEH','HHINCOME','t_income']].corr()
Explanation: The simulated incomes also have similar statistics to the original incomes. However, the median income is high. In San Diego county, the median household income for householders 65 and older in the 2015 5 year ACS is about \$51K, versus \$56K here. For home values, the mean home value for homeowners 65 and older is \$468K in the 5 year ACS, vs \$510K here.
End of explanation
%mt_bibliography
# Tests
Explanation: Bibliography
End of explanation
s,d = make_prototype(incv.VALUEH.astype(float), 130_000, 1_500_000)
plt.plot(s,d)
Explanation: Create a new KDE distribution based on the home values, including only home values between $130,000 and $1.5M (the support of the KDE).
End of explanation
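make_prototype is a helper defined elsewhere in this notebook; a rough hand-rolled equivalent in pure numpy, with synthetic values standing in for incv.VALUEH, might look like the following sketch.

```python
import numpy as np

# Minimal Gaussian KDE by hand (a stand-in for the notebook's
# make_prototype helper; the data here is illustrative).
rng = np.random.default_rng(0)
values = rng.lognormal(mean=13, sigma=0.5, size=2_000)   # fake home values
values = values[(values > 130_000) & (values < 1_500_000)]

bandwidth = values.std() * len(values) ** (-1 / 5)       # Scott's rule
support = np.linspace(130_000, 1_500_000, 200)
# Average a Gaussian kernel centred on each observation.
z = (support[:, None] - values[None, :]) / bandwidth
density = np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
```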
v = incv.VALUEH.astype(float).sort_values()
#v = v[ ( v > 60000 ) & ( v < 1500000 )]
hist, bin_edges = np.histogram(v, bins=100, density=True)
bin_middles = 0.5*(bin_edges[1:] + bin_edges[:-1])
bin_width = bin_middles[1] - bin_middles[0]
assert np.isclose(sum(hist*bin_width),1) # == 1 b/c density==True
hist, bin_edges = np.histogram(v, bins=100) # Now, without 'density'
# And, get back to the counts, but now on the KDE
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(s,d * sum(hist*bin_width));
ax.plot(bin_middles, hist);
Explanation: Overlay the prior plot with the histogram of the original values. We're using np.histogram to make the histogram, so it appears as a line chart.
End of explanation
def plot_compare_curves(p25, p50, p75):
fig = plt.figure(figsize = (20,12))
ax = fig.add_subplot(111)
sp, dp = interpolate_curve(s, d, p25, p50, p75)
ax.plot(pd.Series(s), d, color='black');
ax.plot(pd.Series(sp), dp, color='red');
# Re-input the quantiles for the KDE
# Curves should be co-incident
plot_compare_curves(2.800000e+05,4.060000e+05,5.800000e+05)
Explanation: Show a home value curve, interpolated to the same values as the distribution. The two curves should be coincident.
End of explanation
# Values for SD County home values
plot_compare_curves(349100.0,485900.0,703200.0)
Explanation: Now, interpolate to the values for the county, which shifts the curve right.
End of explanation
sp, dp = interpolate_curve(s, d, 349100.0,485900.0,703200.0)
v = syn_dist(sp, dp, 10000)
plt.hist(v, bins=100);
pd.Series(v).describe()
Explanation: Here is an example of creating an interpolated distribution, then generating a synthetic distribution from it.
End of explanation |
8,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Feature Engineering in BQML
Learning Objectives
Evaluate the model
Extract temporal features, feature cross temporal features
Apply ML.FEATURE_CROSS to categorical features
Create a Euclidian feature column, feature cross coordinate features
Apply the BUCKETIZE function, TRANSFORM clause, L2 Regularization
Introduction
In this lab, we utilize feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, applying feature engineering iteratively to reduce the RMSE and arrive at a final model.
In this Notebook, we perform a feature cross using BigQuery's ML.FEATURE_CROSS, derive coordinate features, feature cross coordinate features, clean up the code, apply the BUCKETIZE function, the TRANSFORM clause, L2 Regularization, and evaluate model performance throughout the process.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Set up environment variables and load necessary libraries
Step1: Note
Step2: The source dataset
Our dataset is hosted in BigQuery. The taxi fare data is a publically available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
Step3: Create the training data table
Since there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query
Step4: Verify table creation
Verify that you created the dataset.
Step5: Baseline Model
Step6: Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.
You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.
Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Evaluate the baseline model
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.
NOTE
Step7: Tip
Step8: NOTE
Step9: Model 1
Step10: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
Step11: Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.
Step12: Model 2
Step13: Model 3
Step14: Model 4
Step15: Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
Step16: Sliding down the slope toward a loss minimum (reduced taxi fare)!
Our fourth model above gives us an RMSE of 9.65 for estimating fares. Recall our heuristic benchmark was 8.29. This may be the result of feature crossing. Let's apply more feature engineering techniques to see if we can't get this loss metric lower!
Model 5
Step17: Next, two distinct SQL statements show metrics for model_5.
Step18: Model 6
Step19: Next, we evaluate model_6.
Step20: Code Clean Up
Exercise
Step21: BQML's Pre-processing functions
Step22: Next, we evaluate model_7.
Step23: Final Model
Step24: Next, we evaluate the final model.
Step25: Predictive Model
Now that you have evaluated your model, the next step is to use it to predict an outcome. You use your model to predict the taxifare amount.
The ML.PREDICT function is used to predict results using your model
Step26: Lab Summary | Python Code:
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
# Install the Google Cloud BigQuery
!pip install --user google-cloud-bigquery==1.25.0
Explanation: Advanced Feature Engineering in BQML
Learning Objectives
Evaluate the model
Extract temporal features, feature cross temporal features
Apply ML.FEATURE_CROSS to categorical features
Create a Euclidian feature column, feature cross coordinate features
Apply the BUCKETIZE function, TRANSFORM clause, L2 Regularization
Introduction
In this lab, we utilize feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, using feature engineering to improve and create a final model. By continuing the utilization of feature engineering to improve the prediction of the fare amount for a taxi ride in New York City by reducing the RMSE.
In this Notebook, we perform a feature cross using BigQuery's ML.FEATURE_CROSS, derive coordinate features, feature cross coordinate features, clean up the code, apply the BUCKETIZE function, the TRANSFORM clause, L2 Regularization, and evaluate model performance throughout the process.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Set up environment variables and load necessary libraries
End of explanation
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
End of explanation
%%bash
# Create a BigQuery dataset for feat_eng if it doesn't exist
datasetexists=$(bq ls -d | grep -w feat_eng)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: feat_eng"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:feat_eng
echo "\nHere are your current datasets:"
bq ls
fi
Explanation: The source dataset
Our dataset is hosted in BigQuery. The taxi fare data is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
End of explanation
%%bigquery
# Creating the table in our dataset.
CREATE OR REPLACE TABLE
feat_eng.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
passenger_count*1.0 AS passengers,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 1
AND fare_amount >= 2.5
AND passenger_count > 0
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
Explanation: Create the training data table
Since there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query: This clause allows us to TRAIN a portion of the data (e.g. one hundred thousand rows versus one million rows), which keeps your query costs down. If you need a refresher on using MOD() for repeatable splits see this post.
Note: The dataset in the create table code below is the one created previously, e.g. "feat_eng". The table name is "feateng_training_data". Run the query to create the table.
End of explanation
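The repeatable-split idea behind that WHERE clause can be sketched in plain Python with an ordinary hash. Note that FARM_FINGERPRINT is a different hash function, so the specific rows selected would differ; only the mechanism is the same.

```python
import hashlib

# Deterministic, repeatable sampling: hash the key and keep one bucket
# out of 10,000 -- analogous to MOD(ABS(FARM_FINGERPRINT(...)), 10000) = 1.
def in_sample(key: str, buckets: int = 10_000, keep_bucket: int = 1) -> bool:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % buckets == keep_bucket

pickups = ["2015-01-01 10:00:00", "2015-01-01 10:01:00", "2015-01-02 23:15:00"]
sampled = [p for p in pickups if in_sample(p)]
# Re-running the filter always selects exactly the same rows.
```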
%%bigquery
# LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT
*
FROM
feat_eng.feateng_training_data
LIMIT
0
Explanation: Verify table creation
Verify that you created the dataset.
End of explanation
%%bigquery
# Creating the baseline model
CREATE OR REPLACE MODEL
feat_eng.baseline_model OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
Explanation: Baseline Model: Create the baseline model
Next, you create a linear regression baseline model with no feature engineering. Recall that a model in BigQuery ML represents what an ML system has learned from the training data. A baseline model is a solution to a problem without applying any machine learning techniques.
When creating a BQML model, you must specify the model type (in our case linear regression) and the input label (fare_amount). Note also that we are using the training data table as the data source.
Now we create the SQL statement to create the baseline model.
End of explanation
%%bigquery
# Eval statistics on the held out data.
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.baseline_model)
Explanation: Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.
You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.
Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Evaluate the baseline model
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.
NOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab.
Review the learning and eval statistics for the baseline_model.
End of explanation
%%bigquery
# TODO 1
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
Explanation: Tip: Make sure to delete the "# TODO 1" when you are about to run the cell.
End of explanation
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
Explanation: NOTE: Because you performed a linear regression, the results include the following columns:
mean_absolute_error
mean_squared_error
mean_squared_log_error
median_absolute_error
r2_score
explained_variance
Resource for an explanation of the Regression Metrics.
Mean squared error (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values.
Root mean squared error (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model, and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data.
R2: An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean.
Next, we write a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the baseline_model.
End of explanation
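The metrics above can also be computed by hand. A small numeric illustration with made-up fare values (not model output):

```python
import numpy as np

# Hand-computed RMSE and R^2, matching the definitions above
# (illustrative fare values, not model output).
y_true = np.array([10.0, 12.5, 7.0, 20.0])
y_pred = np.array([11.0, 12.0, 8.5, 18.0])

mse = np.mean((y_true - y_pred) ** 2)
rmse = np.sqrt(mse)                                 # in dollars, like the fares
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```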
%%bigquery
# Creating the model from benchmark model and extract the Days of Week.
CREATE OR REPLACE MODEL
feat_eng.model_1 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
# TODO 2
EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS dayofweek,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
Explanation: Model 1: EXTRACT dayofweek from the pickup_datetime feature.
In BigQuery, EXTRACT(DAYOFWEEK FROM pickup_datetime) returns an integer in the range 1 to 7, with Sunday as day 1 and Saturday as day 7 (note that this differs from the ISO-8601 convention of 1 for Monday through 7 for Sunday).
If you were to extract the dayofweek from pickup_datetime using BigQuery SQL, the datatype returned would be integer.
Next, we create a model titled "model_1" from the baseline model and extract the dayofweek.
Tip: Make sure to delete the "# TODO 2" when you are about to run the cell.
End of explanation
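A quick sketch of BigQuery's day numbering in Python (BigQuery counts 1 = Sunday through 7 = Saturday, while Python's isoweekday() uses the ISO ordering of 1 = Monday through 7 = Sunday):

```python
from datetime import datetime

# BigQuery's EXTRACT(DAYOFWEEK ...) numbers days 1 (Sunday) .. 7 (Saturday);
# Python's isoweekday() is 1 (Monday) .. 7 (Sunday), so shift it.
def bq_dayofweek(ts: datetime) -> int:
    return ts.isoweekday() % 7 + 1

sunday = datetime(2016, 1, 3)     # 2016-01-03 was a Sunday
saturday = datetime(2016, 1, 2)   # 2016-01-02 was a Saturday
```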
%%bigquery
# Use `ML.TRAINING_INFO` function which allows you to see information about the training iterations of a model
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.model_1)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
Explanation: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
End of explanation
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
Explanation: Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for model_1.
End of explanation
%%bigquery
# Creating the model from benchmark model and extract the hours of day.
CREATE OR REPLACE MODEL
feat_eng.model_2 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS dayofweek,
EXTRACT(HOUR
FROM
pickup_datetime) AS hourofday,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
Explanation: Model 2: EXTRACT hourofday from the pickup_datetime feature
As you recall, pickup_datetime is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number of the given date.
Hourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am.
Next, we create a model titled "model_2" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse.
End of explanation
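Because hourofday is cyclic, one alternative encoding (not used in this lab, shown only for contrast) maps each hour onto a circle with sin/cos so that 23:00 and 00:00 stay numerically close:

```python
import numpy as np

# Cyclic sin/cos encoding of hourofday (an alternative to the
# categorical treatment used in this lab).
hours = np.arange(24)
hour_sin = np.sin(2 * np.pi * hours / 24)
hour_cos = np.cos(2 * np.pi * hours / 24)

# Distance between adjacent hours is identical everywhere on the circle,
# including across the midnight boundary (23 -> 0).
d_23_0 = np.hypot(hour_sin[23] - hour_sin[0], hour_cos[23] - hour_cos[0])
d_11_12 = np.hypot(hour_sin[11] - hour_sin[12], hour_cos[11] - hour_cos[12])
```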
%%bigquery
# Using `CONCAT` function to feature cross the dayofweek and hourofday
CREATE OR REPLACE MODEL
feat_eng.model_3 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
# TODO 3
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
CONCAT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING), CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING)) AS hourofday,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
Explanation: Model 3: Feature cross dayofweek and hourofday using CONCAT
First, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a feature cross).
Note: BQML by default assumes that numbers are numeric features, and strings are categorical features. We need to convert both the dayofweek and hourofday features to strings because the model will otherwise treat any integer as a numerical value rather than a categorical value. If not cast as strings, the dayofweek feature will be interpreted as numeric values (e.g. 1,2,3,4,5,6,7) and hourofday will also be interpreted as numeric values (e.g. the day begins at midnight, 00:00, and the last minute of the day begins at 23:59 and ends at 24:00). As such, there is no way to distinguish the "feature cross" of hourofday and dayofweek "numerically". Casting the dayofweek and hourofday as strings ensures that each element will be treated like a label and will get its own coefficient associated with it.
Create the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. Name the model "model_3"
Tip: Make sure to delete the "# TODO 3" when you are about to run the cell.
End of explanation
%%bigquery
# Using the `ML.FEATURE_CROSS` clause
CREATE OR REPLACE MODEL feat_eng.model_4
OPTIONS
(model_type='linear_reg',
input_label_cols=['fare_amount'])
AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
#CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
#CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS hourofday,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM `feat_eng.feateng_training_data`
Explanation: Model 4: Apply the ML.FEATURE_CROSS clause to categorical features
BigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross.
ML.FEATURE_CROSS generates a STRUCT feature with all combinations of crossed categorical features, except for 1-degree items (the original features) and self-crossing items.
Syntax: ML.FEATURE_CROSS(STRUCT(features), degree)
The features parameter is a comma-separated list of the categorical features to be crossed. The maximum number of input features is 10. An unnamed feature is not allowed in features. Duplicates are not allowed in features.
degree (optional): The highest degree of all combinations, in the range [1, 4]. Defaults to 2.
Output: The function outputs a STRUCT of all combinations except for 1-degree items (the original features) and self-crossing items, with field names as concatenation of original feature names and values as the concatenation of the column string values.
Here, we examine the components of ML.FEATURE_CROSS. Note that the next cell contains errors; correct the cell before continuing.
End of explanation
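Conceptually, the crossed feature is the Cartesian product of the two categorical inputs. A minimal Python illustration of the combinatorics (the exact separator and field naming BQML uses differ; this only shows how the categories multiply):

```python
from itertools import product

# Each (dayofweek, hourofday) pair becomes one crossed category.
days = ["1", "2"]      # dayofweek, cast to STRING
hours = ["0", "23"]    # hourofday, cast to STRING
crossed = [d + "_" + h for d, h in product(days, hours)]
print(crossed)  # ['1_0', '1_23', '2_0', '2_23']
```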
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_4)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_4)
Explanation: Next, two distinct SQL statements show the EVALUATION metrics of model_4.
End of explanation
%%bigquery
# Convert the feature coordinates into a single column of a `spatial` data type
CREATE OR REPLACE MODEL
feat_eng.model_5 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
#CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
#CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS hourofday,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
#pickuplon,
#pickuplat,
#dropofflon,
#dropofflat,
# TODO 4
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean
FROM
`feat_eng.feateng_training_data`
Explanation: Sliding down the slope toward a loss minimum (reduced taxi fare)!
Our fourth model above gives us an RMSE of 9.65 for estimating fares. Recall our heuristic benchmark was 8.29. This may be the result of feature crossing. Let's apply more feature engineering techniques to see if we can't get this loss metric lower!
Model 5: Feature cross coordinate features to create a Euclidean feature
Pickup coordinate:
* pickup_longitude AS pickuplon
* pickup_latitude AS pickuplat
Dropoff coordinate:
* #dropoff_longitude AS dropofflon
* #dropoff_latitude AS dropofflat
Coordinate Features:
* The pick-up and drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.
Recall that latitude and longitude allow us to specify any location on Earth using a set of coordinates. In our training data set, we restricted our data points to only pickups and drop offs within NYC. New York City has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.
The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
We need to convert those coordinates into a single column of a spatial data type. We will use the ST_DISTANCE and the ST_GEOGPOINT functions.
ST_DISTANCE: ST_DISTANCE(geography_1, geography_2). Returns the shortest distance in meters between two non-empty GEOGRAPHYs (e.g. between two spatial objects).
ST_GEOGPOINT: ST_GEOGPOINT(longitude, latitude). Creates a GEOGRAPHY with a single point. ST_GEOGPOINT creates a point from the specified FLOAT64 longitude and latitude parameters and returns that point in a GEOGRAPHY value.
Next we convert the feature coordinates into a single column of a spatial data type. Use the The ST_Distance and the ST.GeogPoint functions.
SAMPLE CODE:
ST_Distance(ST_GeogPoint(value1,value2), ST_GeogPoint(value3, value4)) AS euclidean
Tip: Make sure to delete the "# TODO 4" when you are about to run the cell.
End of explanation
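To build intuition for what ST_Distance returns, here is a rough planar approximation in Python. This is illustrative only; ST_Distance computes a true geodesic distance in meters, and the coordinates below are assumed, not from the dataset.

```python
import math

# Rough planar approximation of the pickup -> dropoff distance in meters.
def approx_distance_m(lon1, lat1, lon2, lat2):
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot((lat2 - lat1) * m_per_deg_lat,
                      (lon2 - lon1) * m_per_deg_lon)

# Midtown Manhattan to JFK, roughly 21 km in a straight line.
d = approx_distance_m(-73.9857, 40.7484, -73.7781, 40.6413)
```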
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_5)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_5)
Explanation: Next, two distinct SQL statements show metrics for model_5.
End of explanation
%%bigquery
# Use feature cross for the pick-up and drop-off locations features.
CREATE OR REPLACE MODEL
feat_eng.model_6 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
#pickuplon,
#pickuplat,
#dropofflon,
#dropofflat,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
CONCAT(ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickuplon,
pickuplat),
0.01)), ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropofflon,
dropofflat),
0.01))) AS pickup_and_dropoff
FROM
`feat_eng.feateng_training_data`
Explanation: Model 6: Feature cross pick-up and drop-off locations features
In this section, we feature cross the pick-up and drop-off locations so that the model can learn pick-up-drop-off pairs that will require tolls.
This step takes the geographic point corresponding to the pickup point and snaps it to a 0.01-degree latitude/longitude grid (roughly 1 km on a side in New York; we should experiment with other grid resolutions as well). Then, it concatenates the pickup and dropoff grid points to learn "corrections" beyond the Euclidean distance associated with pairs of pickup and dropoff locations.
Because the lat and lon by themselves don't have meaning, but only in conjunction, it may be useful to treat the fields as a pair instead of just using them as numeric values. However, lat and lon are continuous numbers, so we have to discretize them first. That's what SnapToGrid does.
ST_SNAPTOGRID: ST_SNAPTOGRID(geography_expression, grid_size). Returns the input GEOGRAPHY, where each vertex has been snapped to a longitude/latitude grid. The grid size is determined by the grid_size parameter which is given in degrees.
REMINDER: The ST_GEOGPOINT creates a GEOGRAPHY with a single point. ST_GEOGPOINT creates a point from the specified FLOAT64 longitude and latitude parameters and returns that point in a GEOGRAPHY value. The ST_Distance function returns the minimum distance between two spatial objects. It also returns meters for geographies and SRID units for geometrics.
The following SQL statement is incorrect. Modify the code to feature cross the pick-up and drop-off locations features.
End of explanation
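The effect of ST_SnapToGrid can be mimicked by rounding each coordinate to the nearest multiple of the grid size (0.01 degrees, matching the query above). The coordinates are made-up examples.

```python
# Mimic ST_SnapToGrid: round a coordinate to the nearest multiple of
# the grid size (0.01 degrees, as in the query above).
def snap(value: float, grid: float = 0.01) -> float:
    return round(round(value / grid) * grid, 6)

pickup = (snap(-73.9857), snap(40.7484))
dropoff = (snap(-73.9861), snap(40.7479))
# Nearby points fall into the same cell, so their concatenation yields
# one categorical pickup_and_dropoff value.
```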
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_6)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_6)
Explanation: Next, we evaluate model_6.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_6 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
# Use `ML.FEATURE_CROSS` function is a formed by multiplying two or more features.
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
CONCAT(ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickuplon,
pickuplat),
0.01)), ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropofflon,
dropofflat),
0.01))) AS pickup_and_dropoff
FROM
`feat_eng.feateng_training_data`
Explanation: Code Clean Up
Exercise: Clean up the code to see where we are
Remove all the commented statements in the SQL statement. We should now have a total of five input features for our model.
1. fare_amount
2. passengers
3. day_hr
4. euclidean
5. pickup_and_dropoff
End of explanation
%%bigquery
# Use `BUCKETIZE` function
CREATE OR REPLACE MODEL
feat_eng.model_7 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
# Use `ML.FEATURE_CROSS` function is a formed by multiplying two or more features.
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
CONCAT( ML.BUCKETIZE(pickuplon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(pickuplat,
GENERATE_ARRAY(37, 45, 0.01)), ML.BUCKETIZE(dropofflon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(dropofflat,
GENERATE_ARRAY(37, 45, 0.01)) ) AS pickup_and_dropoff
FROM
`feat_eng.feateng_training_data`
Explanation: BQML's Pre-processing functions:
Here are some of the preprocessing functions in BigQuery ML:
* ML.FEATURE_CROSS(STRUCT(features)) does a feature cross of all the combinations
* ML.POLYNOMIAL_EXPAND(STRUCT(features), degree) creates x, x<sup>2</sup>, x<sup>3</sup>, etc.
* ML.BUCKETIZE(f, split_points) where split_points is an array
Model 7: Apply the BUCKETIZE Function
BUCKETIZE
Bucketize is a pre-processing function that creates "buckets" (i.e., bins): it bucketizes a continuous numerical feature into a string feature with bucket names as the value.
ML.BUCKETIZE(feature, split_points)
feature: A numerical column.
split_points: Array of numerical points to split the continuous values in feature into buckets. With n split points (s1, s2 … sn), there will be n+1 buckets generated.
Output: The function outputs a STRING for each row, which is the bucket name. bucket_name is in the format of bin_<bucket_number>, where bucket_number starts from 1.
Currently, our model uses the ST_GeogPoint function to derive the pickup and dropoff feature. In this lab, we use the BUCKETIZE function to create the pickup and dropoff feature.
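A hypothetical Python sketch of these semantics (the exact boundary handling, i.e. whether a value equal to a split point falls into the lower or the upper bucket, is an assumption here):

```python
import bisect

# Sketch of ML.BUCKETIZE semantics: with n split points there are n+1
# buckets, named bin_1 ... bin_(n+1); bucket numbering starts at 1.
def bucketize(value, split_points):
    # values below the first split point fall into bin_1; values equal to
    # a split point go into the upper bucket here (an assumption)
    return "bin_%i" % (bisect.bisect_right(split_points, value) + 1)

splits = [-74.0, -73.9, -73.8]      # 3 split points -> 4 buckets
print(bucketize(-74.5, splits))     # bin_1
print(bucketize(-73.95, splits))    # bin_2
print(bucketize(-73.7, splits))     # bin_4
```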
Next, apply the BUCKETIZE function to model_7 and run the query.
End of explanation
%%bigquery
# Use `ML.TRAINING_INFO` function which allows you to see information about the training iterations of a model
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.model_7)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_7)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_7)
Explanation: Next, we evaluate model_7.
End of explanation
%%bigquery
# Use the `TRANSFORM` clause
CREATE OR REPLACE MODEL
feat_eng.final_model
# TODO 5
TRANSFORM(fare_amount,
#SQRT( (pickuplon-dropofflon)*(pickuplon-dropofflon) + (pickuplat-dropofflat)*(pickuplat-dropofflat) ) AS euclidean,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
  # Use the `ML.FEATURE_CROSS` function to create a feature cross, formed by combining two or more features.
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
CONCAT( ML.BUCKETIZE(pickuplon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(pickuplat,
GENERATE_ARRAY(37, 45, 0.01)), ML.BUCKETIZE(dropofflon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(dropofflat,
GENERATE_ARRAY(37, 45, 0.01)) ) AS pickup_and_dropoff ) OPTIONS(input_label_cols=['fare_amount'],
model_type='linear_reg',
l2_reg=0.1) AS
SELECT
*
FROM
feat_eng.feateng_training_data
Explanation: Final Model: Apply the TRANSFORM clause and L2 Regularization
Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. BigQuery ML supports defining data transformations during model creation, which are automatically applied during prediction and evaluation. This is done through the TRANSFORM clause in the existing CREATE MODEL statement. By using the TRANSFORM clause, user-specified transforms defined during training are automatically applied during model serving (prediction, evaluation, etc.).
In our case, we are using the TRANSFORM clause to separate the raw input data from the TRANSFORMED features. The input columns of the TRANSFORM clause are the columns of the query_expr (the AS SELECT part). The output columns of TRANSFORM, taken from its select_list, are used in training. These transformed columns are post-processed with standardization for numerics and one-hot encoding for categorical variables by default.
The advantage of encapsulating features in the TRANSFORM clause is that the client code doing the PREDICT doesn't change, i.e. our model improvement is transparent to client code. Note that the TRANSFORM clause MUST be placed after the CREATE statement.
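Conceptually (a hypothetical Python sketch, not BigQuery ML's API), the transform is stored alongside the model, so serving code passes raw rows and never re-implements the feature engineering:

```python
# Toy model wrapper: the stored transform is re-applied at predict time.
class TransformedModel:
    def __init__(self, transform, predict_raw):
        self.transform = transform          # saved with the model
        self.predict_raw = predict_raw      # stands in for the trained model

    def predict(self, raw_row):
        # client code passes raw inputs; the transform runs automatically
        return self.predict_raw(self.transform(raw_row))

# transform derives a squared-distance feature from raw coordinates
def transform(row):
    return (row["lon1"] - row["lon2"]) ** 2 + (row["lat1"] - row["lat2"]) ** 2

model = TransformedModel(transform, lambda d: 2.5 + 10.0 * d)  # toy "fit"
print(round(model.predict({"lon1": 0.0, "lon2": 0.3,
                           "lat1": 0.0, "lat2": 0.4}), 6))     # 5.0
```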
L2 Regularization
Sometimes the training RMSE is quite reasonable, but the evaluation RMSE shows considerably more error. A large delta between the evaluation RMSE and the training RMSE can be an indication of overfitting. When we do feature crosses, we run the risk of overfitting (for example, when a particular day-hour combo doesn't have enough taxi rides).
Overfitting is a phenomenon that occurs when a machine learning or statistics model is tailored to a particular dataset and is unable to generalize to other datasets. This usually happens in complex models, like deep neural networks. Regularization is a process of introducing additional information in order to prevent overfitting.
Therefore, we will apply L2 Regularization to the final model. As a reminder, a regression model that uses the L1 regularization technique is called Lasso Regression while a regression model that uses the L2 Regularization technique is called Ridge Regression. The key difference between these two is the penalty term. Lasso shrinks the less important feature’s coefficient to zero, thus removing some features altogether. Ridge regression adds “squared magnitude” of coefficient as a penalty term to the loss function.
In other words, L1 limits the size of the coefficients. L1 can yield sparse models (i.e. models with few non-zero coefficients): some coefficients can become zero and be eliminated.
L2 regularization adds an L2 penalty equal to the square of the magnitude of coefficients. L2 will not yield sparse models and all coefficients are shrunk by the same factor (none are eliminated).
The regularization terms are ‘constraints’ by which an optimization algorithm must ‘adhere to’ when minimizing the loss function, apart from having to minimize the error between the true y and the predicted ŷ. This in turn reduces model complexity, making our model simpler. A simpler model can reduce the chances of overfitting.
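Numerically, the two penalty terms look like this (a small illustrative sketch; reg=0.1 mirrors the l2_reg=0.1 option used in the final model):

```python
# Illustrative penalty terms added to the training loss for a toy
# weight vector.
def l1_penalty(weights, reg):
    return reg * sum(abs(w) for w in weights)    # lasso: sum of |w|

def l2_penalty(weights, reg):
    return reg * sum(w * w for w in weights)     # ridge: sum of w^2

weights = [0.5, -2.0, 0.0, 1.5]
print(round(l1_penalty(weights, 0.1), 6))   # 0.4
print(round(l2_penalty(weights, 0.1), 6))   # 0.65
```

Note how the zero weight contributes nothing to either penalty, while the largest weight (-2.0) dominates the L2 term; this is the sense in which ridge penalizes large coefficients more heavily.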
Apply the TRANSFORM clause and L2 Regularization to the final model.
Tip: Make sure to delete the "# TODO 5" when you are about to run the cell.
End of explanation
%%bigquery
# Use `ML.TRAINING_INFO` function which allows you to see information about the training iterations of a model
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.final_model)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.final_model)
%%bigquery
# Use `ML.EVALUATE` function to evaluate model metrics.
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.final_model)
Explanation: Next, we evaluate the final model.
End of explanation
%%bigquery
# Use `ML.PREDICT` function to predict outcomes using the model.
SELECT
*
FROM
ML.PREDICT(MODEL feat_eng.final_model,
(
SELECT
-73.982683 AS pickuplon,
40.742104 AS pickuplat,
-73.983766 AS dropofflon,
40.755174 AS dropofflat,
3.0 AS passengers,
TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime ))
Explanation: Predictive Model
Now that you have evaluated your model, the next step is to use it to predict an outcome. You use your model to predict the taxi fare amount.
The ML.PREDICT function is used to predict results using your model: feat_eng.final_model.
Since this is a regression model (predicting a continuous numerical value), the best way to see how it performed is to evaluate the difference between the value predicted by the model and the benchmark score. We can do this with an ML.PREDICT query.
Apply the ML.PREDICT function.
End of explanation
# Visualize the RMSE chart
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
x = ['m4', 'm5', 'm6','m7', 'final']
RMSE = [9.65,5.58,5.90,6.23,5.39]
x_pos = [i for i, _ in enumerate(x)]
plt.bar(x_pos, RMSE, color='green')
plt.xlabel("Model")
plt.ylabel("RMSE")
plt.title("RMSE Model Summary")
plt.xticks(x_pos, x)
plt.show()
Explanation: Lab Summary:
Our ML problem: Develop a model to predict taxi fare based on distance -- from one point to another in New York City.
OPTIONAL Exercise: Create a RMSE summary table.
Create a RMSE summary table:
Solution Table
| Model       | RMSE | Description                                      |
|-------------|------|--------------------------------------------------|
| model_4     | 9.65 | Feature cross categorical features               |
| model_5     | 5.58 | Create a Euclidean feature column                |
| model_6     | 5.90 | Feature cross geo-location features              |
| model_7     | 6.23 | Apply the BUCKETIZE function                     |
| final_model | 5.39 | Apply the TRANSFORM clause and L2 regularization |
RUN the cell to visualize a RMSE bar chart.
End of explanation |
8,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step22: Comparing the spectrum of different graphs
Assume we have a graph with $N$ nodes $(0\ldots N-1)$ and undirected, unweighted edges between those notes. Then the Adjacency Matrix $A$ of a graph is defined as the matrix where for every pair of nodes $(i,j)$ that are connected by an edge, $A_{ij}=A_{ji}=1$, and all other nodes are zero. The Laplace Matrix $L$ is similar, except that $L_{ij}=L_{ji}=-1$ and the diagonal elements are chosen such that the sum of coefficient over all the rows is zero. The Laplace matrix defines a quadratic form over the space of functions on the nodes, the latter space being represented as the standard vector space $R^N$. Explicitly, for a function (or, vector) $(x_i)$ we define the Laplacian quadratic form as
$$
L_{xx} = \sum_{\mathrm{edges} (i,j)} (x_i - x_j)^2
$$
ie it is the sum of the squared differences of the function values, summed over all edges of the graph.
It is easy to see that there is (at least) one null-vector, the constant vector $(1,1,\ldots)$. It is easy to see that the dimension of the null-space is equal to the number of connected components of the graph
Step23: Pure topologies
Solid
Step24: Star-shaped
Step25: Linear (one dimension)
note that for linear nearest neighbours the maximum and (asymptotically in the non-circular case) average number of connections per node is $2$, in which case the eigenvalue is $4$, ie bigger than $\max+1$ as in the solid case. For distance $2$ we have $4$ connections per node, and max eigenvalue $6.2$, and for distance $3$ we have $6$ connections and $8.6$
Step26: Grid (two dimensions)
In the two-dimensional grid the maximal and (asymptotically for non-circular) average number of edges per node is $4$. The eigenvalue is $7.7$ (note that this is higher than in the above case where 2nd nearest neighbours, which also have $4$ connections, had a value of $6.2$)
Step27: Mixed topologies
Linear, with star overlay
Step28: Solid-solid, no interlinks
Step29: Solid-solid, with interlinks
Step30: Solid-solid, with star overlay
Step31: Linear-linear
Step32: Star-star
Step33: Grid-grid
Step34: Complex topologies | Python Code:
import numpy as np
class GraphMatrix:
    """class to manage and create graph matrices
    the constructor takes the dimension of the matrix
    """
version = "2.0"
def __init__(self, dimension):
self.array = np.zeros((dimension,dimension))
self.dim = dimension
def link (self, lfrom, lto, symmetric=True, weight=1.0):
        """creates a link from index lfrom to index lto
        if symmetric == True (default) then a backlink is also created;
        the diagonal is also adjusted (so that the Laplace row sums remain zero)
        """
self.array[lfrom,lto] -= weight;
self.array[lfrom,lfrom] += weight
if symmetric:
self.array[lto,lfrom] -= weight
self.array[lto,lto] += weight
return self.array
def lapl(self):
        """returns the Laplace matrix associated to the graph"""
return self.array
def adj(self):
        """returns the adjacency matrix associated to the graph"""
adj = 1 * self.array
for ix in range(0,self.dim):
adj[ix][ix] = 0
return -adj
def d(self):
        """returns the dimension of the matrix (as known by the object; not checked if modified!)"""
return self.dim;
def printl(self):
        """prints all the links in the matrix"""
links = 0
for ix1 in range(0,self.dim):
for ix2 in range(0,self.dim):
if self.array[ix1][ix2] < 0:
print ("%i - %i" % (ix1, ix2))
links += 1
maxlinks = self.dim*(self.dim-1)
print ("%i links (out of %i; ratio %f)" % (links, self.dim*(self.dim-1), (1.0*links)/maxlinks))
from random import random
def starGraph(GM, ix_center=None, ix_start=None, ix_end = None, weight=1, symmetric=True):
    """creates links in the matrix graph GM that have a star topology
    (if GM is an integer the matrix GM is newly created)
    ix_center - the node at the center of the star (default: 0)
    ix_start - the start of the star 'spike' nodes (default: center + 1)
    ix_end - the end of the star 'spike' nodes (default: last node in graph)
    returns GM
    """
if type(GM) == int: GM = GraphMatrix(GM)
if ix_center == None: ix_center = 0
if ix_start == None: ix_start = ix_center + 1
if ix_end == None: ix_end = GM.d()
for ix in range(ix_start,ix_end):
GM.link(ix_center,ix,symmetric=symmetric, weight=weight)
return GM
def starGraph1(GM, ix_center=None, ix_start=None, ix_end = None, proba=1.0, weight=1, symmetric=True):
    """creates links in the matrix graph GM that have a star topology;
    any link, however, is only created with a probability of `proba`
    (if GM is an integer the matrix GM is newly created)
    ix_center - the node at the center of the star (default: 0)
    ix_start - the start of the star 'spike' nodes (default: center + 1)
    ix_end - the end of the star 'spike' nodes (default: last node in graph)
    returns GM
    """
if type(GM) == int: GM = GraphMatrix(GM)
if ix_center == None: ix_center = 0
if ix_start == None: ix_start = ix_center + 1
if ix_end == None: ix_end = GM.d()
for ix in range(ix_start,ix_end):
if (random()<=proba): GM.link(ix_center,ix,symmetric=symmetric, weight=weight)
return GM
def linearGraph(GM, ix_start=None, ix_end = None, circular=False, weight=1, symmetric=True):
    """creates links in the matrix graph GM that have a linear (or circular) topology
    (if GM is an integer the matrix GM is newly created)
    ix_start - the start of the linear nodes (default: 0)
    ix_end - the end of the linear nodes (default: last node in graph)
    returns GM
    """
if type(GM) == int: GM = GraphMatrix(GM)
if ix_start == None: ix_start = 0
if ix_end == None: ix_end = GM.d()
for ix in range(ix_start,ix_end-1):
GM.link(ix,ix+1,symmetric=symmetric, weight=weight)
if circular: GM.link(ix_end-1,ix_start,symmetric=symmetric, weight=weight)
return GM
def linearGraph1(GM, ix_start=None, ix_end = None, distance=1, weight=1, symmetric=True):
    """creates links in the matrix graph GM that have a linear topology
    (if GM is an integer the matrix GM is newly created)
    ix_start - the start of the linear nodes (default: 0)
    ix_end - the end of the linear nodes (default: last node in graph)
    distance - up to how far apart nodes will be linked
    returns GM
    """
if type(GM) == int: GM = GraphMatrix(GM)
if ix_start == None: ix_start = 0
if ix_end == None: ix_end = GM.d()
for dist in range(1,distance+1):
for ix in range(ix_start,ix_end-dist):
GM.link(ix,ix+dist,symmetric=symmetric, weight=weight)
return GM
def solidGraph(GM, ix_start=None, ix_end = None, weight=1):
    """creates links in the matrix graph GM that have a solid topology (everyone linked to everyone)
    (if GM is an integer the matrix GM is newly created)
    ix_start - the start of the solid nodes (default: 0)
    ix_end - the end of the solid nodes (default: last node in graph)
    returns GM
    """
if type(GM) == int: GM = GraphMatrix(GM)
if ix_start == None: ix_start = 0
if ix_end == None: ix_end = GM.d()
for ix in range(ix_start,ix_end):
for ix2 in range (ix_start, ix_end):
GM.link(ix,ix2, symmetric=False, weight=weight)
return GM
def solidGraph1(GM, ix_start=None, ix_end = None, proba=1.0, weight=1):
    """creates links in the matrix graph GM that have a solid topology (everyone linked to everyone);
    any link, however, is only created with a probability of `proba`
    (if GM is an integer the matrix GM is newly created)
    ix_start - the start of the solid nodes (default: 0)
    ix_end - the end of the solid nodes (default: last node in graph)
    returns GM
    """
if type(GM) == int: GM = GraphMatrix(GM)
if ix_start == None: ix_start = 0
if ix_end == None: ix_end = GM.d()
for ix in range(ix_start,ix_end):
for ix2 in range (ix+1, ix_end):
if (random()<=proba): GM.link(ix,ix2, weight=weight)
return GM
def gridGraph(GM=None, d1=None, d2=None, circular1=False, circular2=False, ix_start=None, weight=1):
    """creates links in the matrix graph GM that have a 2D-grid topology (everyone linked to its next neighbours)
    (if GM is None then the matrix GM is newly created based on d1, d2, and ix_start)
    d1, d2 - the number of nodes in the two dimensions of the grid
    circular1, circular2 - indicators whether the grid is wrapped around at the respective boundary
    ix_start - the start of the grid nodes (default: 0)
    returns GM
    """
if d1 == None: d1=1
if d2 == None: d2=1
if ix_start == None: ix_start=0
if GM == None: GM = GraphMatrix(ix_start + d1*d2)
for ix2 in range(0,d2):
for ix1 in range(0,d1):
ix = ix_start + ix2*d1 + ix1
if ix1 < d1-1: GM.link(ix,ix+1, symmetric=True, weight=weight)
else:
if circular1: GM.link(ix,ix_start+ix2*d1, symmetric=True, weight=weight)
if ix2 < d2-1: GM.link(ix,ix+d1, symmetric=True, weight=weight)
else:
if circular2: GM.link(ix,ix_start+ix1, symmetric=True, weight=weight)
return GM
import numpy as np
import matplotlib.pyplot as plt
class EigenSystem:
    """collection of tools to deal with eigensystems (eigenvectors and associated values)"""
version = "1.0"
def __init__ (self, matrix, sort=True):
        """evals, evecs - the eigensystem
        evecs_in_cols - true iff the vectors are in the columns of the matrix (NumPy default)
        sort - if True, the eigensystem will be sorted
        """
        if not type(matrix) == np.ndarray: matrix = matrix.array
evals, evecs = np.linalg.eig(matrix)
evecs = evecs.T
if sort: evals, evecs = self._sort(evals, evecs, evecs_in_cols=False)
self._evals = evals
self._evecs = evecs # we like the vectors in the rows
return
def evals(self):
        """accessor: eigenvalues"""
return self._evals
def evecs(self, evecs_in_cols=False):
        """accessor: eigenvectors"""
if evecs_in_cols: return self._evecs.T
else: return self._evecs
def evec(self, ix, d=None):
        """returns one eigenvector
        if d == (d1,d2) the coefficients are put into a d1 x d2 matrix
        """
if d == None: return self._evecs[ix]
vec = self._evecs[ix]
d1 = d[0];
d2 = d[1];
ixix = 0;
        result = np.zeros((d2,d1)) # shape: (d2 rows, d1 columns)
for ix2 in range(0,d2):
for ix1 in range(0,d1):
result[ix2][ix1] = vec[ixix] # [col][row]
ixix += 1
return result
def d(self):
return len(self._evals)
def _sort(self, evals=None, evecs=None, reverse=True, evecs_in_cols=True):
        """takes the eigenvalues and eigenvectors as returned by the numpy.linalg.eig function
        and jointly sorts them by size (default: highest first, unless reverse is False)
        """
        if evals is None: evals = self._evals
        if evecs is None: evecs = self._evecs
if evecs_in_cols: evecs = evecs.T
evals1 = list((i,x) for x,i in enumerate(evals))
evals1.sort(reverse=reverse)
        evalss = np.array(list(ev for ev,i in evals1))
        evecss = np.array(list(evecs[i] for ev,i in evals1))
return evalss, evecss
def plot1 (self, therange, vals=False, title=None):
        """plots eigenvectors on a graph (legend is eigenvalue)"""
if vals:
if title==None: plt.title("Eigenvalues")
else: plt.title(title)
plt.plot(self._evals, 'o-')
plt.show()
for i in therange:
plt.plot(self._evecs[i],'o-', label="%.2f (#%i)" % (round(self._evals[i],2),i))
if len(therange)>0:
if title==None: plt.title("Eigenvectors")
else: plt.title(title)
plt.legend()
plt.show()
def plot2 (self, px=0, py=1):
        """plots a projection of the eigenvectors onto a plane
        px, py are the dimension numbers for the projection
        """
x = []
y = []
for v in self._evecs:
x.append(v[px])
y.append(v[py])
plt.plot(x,y,'+')
plt.title("Graph of Eigenvectors")
        plt.show()
def plot3 (self, ix, d1, d2):
        """assumes the eigenvector corresponds to a matrix d1 x d2
        and plots each of the segments as one curve
        """
vec = self.evec(ix, (d1,d2))
for i,v in enumerate(vec):
plt.plot(v, 'o-', label = "%i" % i)
plt.title("Eigenvector #%i (%f)" % (ix, self._evals[ix]))
plt.legend()
plt.show()
return vec
Explanation: Comparing the spectrum of different graphs
Assume we have a graph with $N$ nodes $(0\ldots N-1)$ and undirected, unweighted edges between those notes. Then the Adjacency Matrix $A$ of a graph is defined as the matrix where for every pair of nodes $(i,j)$ that are connected by an edge, $A_{ij}=A_{ji}=1$, and all other nodes are zero. The Laplace Matrix $L$ is similar, except that $L_{ij}=L_{ji}=-1$ and the diagonal elements are chosen such that the sum of coefficient over all the rows is zero. The Laplace matrix defines a quadratic form over the space of functions on the nodes, the latter space being represented as the standard vector space $R^N$. Explicitly, for a function (or, vector) $(x_i)$ we define the Laplacian quadratic form as
$$
L_{xx} = \sum_{\mathrm{edges} (i,j)} (x_i - x_j)^2
$$
ie it is the sum of the squared differences of the function values, summed over all edges of the graph.
It is easy to see that there is (at least) one null-vector, the constant vector $(1,1,\ldots)$. More generally, the dimension of the null-space is equal to the number of connected components of the graph: any vector that is unity on one component and zero elsewhere is a null vector.
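To make the quadratic form concrete, here is a small standalone sketch (using a path graph on four nodes) checking that $x^T L x$ equals the sum of squared edge differences, and that the constant vector is a null vector:

```python
import numpy as np

# Build the Laplace matrix of the path graph 0-1-2-3 by hand
edges = [(0, 1), (1, 2), (2, 3)]
L = np.zeros((4, 4))
for i, j in edges:
    L[i, j] -= 1; L[j, i] -= 1   # off-diagonal entries: -1 per edge
    L[i, i] += 1; L[j, j] += 1   # diagonal keeps the row sums at zero

x = np.array([1.0, 3.0, 0.0, 2.0])
quad = x @ L @ x                                   # Laplacian quadratic form
by_edges = sum((x[i] - x[j]) ** 2 for i, j in edges)
print(quad, by_edges)                              # both are 17.0
print(L @ np.ones(4))                              # constant vector is a null vector
```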
Below we will look at a number of different fundamental graph topologies:
solid topologies where everyone is linked with everyone else
next neighbour topologies where nodes are arranged according to a geometric object (a line or a circle in one dimension; a flat surface, or the surface of a cylinder or a torus in two dimensions) and only nearest neighbours are linked (or possibly 2nd, 3rd neighbours etc. as well, but with the connection width significantly smaller than the width of the grid)
star topologies where one node is linked to an entire group of peripheral nodes that are not connected amongst themselves
We also look at composite topologies where a number of the fundamental topologies are combined in one graph. For example, we can have several segments of the graph that are solidly connected within themselves, but that are disjoint, connected by only a few next-neighbour links, or attached as peripheral nodes in a star topology. We also look at random topologies where a certain percentage of the links is dropped.
The key findings from the analysis we ran below are the following:
every connected graph has exactly one zero eigenvalue
in a connected solid graph, all eigenvalues (apart from the zero one) are the same; the value is the number of nodes in the graph, which is equal to one plus the maximal (and average, in this case) number of edges per node
in a connected star graph, there is one very large eigenvalue, and all other eigenvalues (apart from the zero one) are the same and comparatively small; the largest eigenvalue is equal to the number of nodes in the graph, which is equal to one plus the number of edges in the central node; the small eigenvalues are equal to unity
in a next-neighbour graph there is a continuous increase in eigenvalues without apparent gaps; usually all eigenvalues are different, unless there are symmetries (eg, rotation for circle- and torus-based grids); the further-reaching the connections, the more the eigenvalue "line" is curved, slowly approaching the solid graph line; the size of the largest eigenvalue is not clear: it tends to be bigger than the maximal number of edges on a node plus one, and it also depends on the topology (eg a second-neighbour linear graph has a smaller maximal eigenvalue than a grid graph, $6.2$ vs $7.7$, even though both graphs have a maximum and asymptotically average number of edges of $4$)
in a probabilistic (solid) graph the spectrum becomes similar to that of a next-neighbour graph; the average eigenvalue seems to be around the average number of edges per node, and our conjecture is that the maximum eigenvalue is the maximum number of connections per node plus one
a combination graph usually looks like the sum of its parts. For example, two disjoint solid graphs will have a piecewise flat spectrum, and the combination of a linear graph and star graphs will have a gently sloping spectrum with a spike for each of the stars
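The solid- and star-graph claims can be checked directly with a quick standalone NumPy sketch (eigvalsh returns the eigenvalues of the symmetric Laplacian in ascending order):

```python
import numpy as np

def laplacian(n, edges):
    # Laplace matrix: -1 per edge off-diagonal, row sums zero
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1; L[j, i] -= 1
        L[i, i] += 1; L[j, j] += 1
    return L

N = 6
solid = laplacian(N, [(i, j) for i in range(N) for j in range(i + 1, N)])
star = laplacian(N, [(0, j) for j in range(1, N)])
print(np.round(np.linalg.eigvalsh(solid), 6))  # approx. [0, 6, 6, 6, 6, 6]
print(np.round(np.linalg.eigvalsh(star), 6))   # approx. [0, 1, 1, 1, 1, 6]
```

As claimed above: the solid graph has one zero eigenvalue and all others equal to $N$, while the star has one zero eigenvalue, a large eigenvalue of $N$, and the remaining ones equal to unity.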
Library functions
see Notebook Library-General (those functions should really be in a module, but then this notebook would not be self-sufficient)
End of explanation
gm = solidGraph(30)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid 30 (eigenvalues)")
gm = solidGraph1(30, proba=0.9)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid-90% 30 (eigenvalues)")
gm = solidGraph1(30, proba=0.5)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid-50% 30 (eigenvalues)")
gm = solidGraph1(30, proba=0.1)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid-10% 30 (eigenvalues)")
Explanation: Pure topologies
Solid
End of explanation
gm = starGraph(30)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="star 30 (eigenvalues)")
Explanation: Star-shaped
End of explanation
gm = linearGraph(50, circular=False)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear 50, nearest only, non circular (eigenvalues)")
gm = linearGraph(50, circular=True)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear 50, nearest only, circular (eigenvalues)")
gm = linearGraph1(50, distance=2)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear 50, dist=2 (eigenvalues)")
gm = linearGraph1(50, distance=3)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear 50, dist=3 (eigenvalues)")
gm = linearGraph1(50, distance=5)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear 50, dist=5 (eigenvalues)")
Explanation: Linear (one dimension)
note that for linear nearest neighbours the maximum and (asymptotically in the non-circular case) average number of connections per node is $2$, in which case the eigenvalue is $4$, ie bigger than $\max+1$ as in the solid case. For distance $2$ we have $4$ connections per node, and max eigenvalue $6.2$, and for distance $3$ we have $6$ connections and $8.6$
End of explanation
gm = gridGraph(d1=9, d2=7, circular1=False, circular2=False)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="grid 9x7, non circular (eigenvalues)")
gm = gridGraph(d1=9, d2=7, circular1=True, circular2=False)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="grid 9x7, semi circular (eigenvalues)")
gm = gridGraph(d1=9, d2=7, circular1=True, circular2=True)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="grid 9x7, circular (eigenvalues)")
Explanation: Grid (two dimensions)
In the two-dimensional grid the maximal and (asymptotically for non-circular) average number of edges per node is $4$. The eigenvalue is $7.7$ (note that this is higher than in the above case where 2nd nearest neighbours, which also have $4$ connections, had a value of $6.2$)
End of explanation
gm = GraphMatrix(51)
linearGraph(gm, ix_start=0, ix_end=50)
starGraph(gm, ix_center=50, ix_start=0, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear 50; star overlay (eigenvalues)")
gm = GraphMatrix(51)
linearGraph(gm, ix_start=0, ix_end=50)
starGraph(gm, ix_center=50, ix_start=0, ix_end=10)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear 50; partial star overlay 10 (eigenvalues)")
Explanation: Mixed topologies
Linear, with star overlay
End of explanation
gm = GraphMatrix(50)
solidGraph(gm, ix_start=0, ix_end=25)
solidGraph(gm, ix_start=25, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid 25 + 25 (eigenvalues)")
gm = GraphMatrix(50)
solidGraph(gm, ix_start=0, ix_end=10)
solidGraph(gm, ix_start=10, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid 10 + 40 (eigenvalues)")
gm = GraphMatrix(60)
solidGraph(gm, ix_start=0, ix_end=10)
solidGraph(gm, ix_start=10, ix_end=30)
solidGraph(gm, ix_start=30, ix_end=60)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid 10 + 20 + 30 (eigenvalues)")
Explanation: Solid-solid, no interlinks
End of explanation
gm = GraphMatrix(50)
solidGraph(gm, ix_start=0, ix_end=25)
solidGraph(gm, ix_start=25, ix_end=50)
for ix in range(0,1):
gm.link(ix, ix+25)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid 25 + 25; 1 interlink (eigenvalues)")
gm = GraphMatrix(50)
solidGraph(gm, ix_start=0, ix_end=25)
solidGraph(gm, ix_start=25, ix_end=50)
for ix in range(0,5):
gm.link(ix, ix+25)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid 25 + 25; 5 interlinks (eigenvalues)")
gm = GraphMatrix(50)
solidGraph(gm, ix_start=0, ix_end=25)
solidGraph(gm, ix_start=25, ix_end=50)
for ix in range(0,10):
gm.link(ix, ix+25)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid 25 + 25; 10 interlinks (eigenvalues)")
Explanation: Solid-solid, with interlinks
End of explanation
gm = GraphMatrix(51)
solidGraph(gm, ix_start=0, ix_end=25)
solidGraph(gm, ix_start=25, ix_end=50)
starGraph(gm, ix_center=50, ix_start=0, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid 25 + 25; star overlay (eigenvalues)")
gm = GraphMatrix(51)
solidGraph(gm, ix_start=0, ix_end=20)
solidGraph(gm, ix_start=20, ix_end=50)
starGraph(gm, ix_center=50, ix_start=0, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid 20 + 30; star overlay (eigenvalues)")
Explanation: Solid-solid, with star overlay
End of explanation
gm = GraphMatrix(50)
linearGraph(gm, ix_start=0, ix_end=25)
linearGraph(gm, ix_start=25, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear 25 + 25 (eigenvalues)")
gm = GraphMatrix(50)
linearGraph(gm, ix_start=0, ix_end=10)
linearGraph(gm, ix_start=10, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear 10 + 40 (eigenvalues)")
gm = GraphMatrix(50)
linearGraph1(gm, ix_start=0, ix_end=25, distance=5)
linearGraph1(gm, ix_start=25, ix_end=50, distance=5)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear-5 25 + 25 (eigenvalues)")
gm = GraphMatrix(51)
linearGraph1(gm, ix_start=0, ix_end=25, distance=5)
linearGraph1(gm, ix_start=25, ix_end=50, distance=5)
starGraph(gm, ix_center=50, ix_start=0, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="linear-5 25 + 25; star overlay (eigenvalues)")
Explanation: Linear-linear
End of explanation
gm = GraphMatrix(50)
starGraph(gm, ix_center=0, ix_start=1, ix_end=25)
starGraph(gm, ix_center=25, ix_start=26, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="start 25 + 25 (eigenvalues)")
gm = GraphMatrix(50)
starGraph(gm, ix_center=0, ix_start=1, ix_end=10)
starGraph(gm, ix_center=10, ix_start=11, ix_end=50)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="star 10 + 40 (eigenvalues)")
Explanation: Star-star
End of explanation
gm = GraphMatrix(63+63)
gm = gridGraph(GM=gm, d1=9, d2=7, ix_start=0, circular1=False, circular2=False)
gm = gridGraph(GM=gm, d1=9, d2=7, ix_start=63, circular1=False, circular2=False)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="grid 9x7 + 9x7 (eigenvalues)")
Explanation: Grid-grid
End of explanation
N = 5
w1 = 20
p = .1
ps = 0.5
gm = GraphMatrix(w1*N+2)
for i in range(0,N):
solidGraph1(gm, ix_start=i*w1, ix_end=(i+1)*w1, proba=p)
starGraph1(gm, ix_center=w1*N, proba=ps, ix_start=0, ix_end=w1*N)
starGraph1(gm, ix_center=w1*N+1, proba=ps, ix_start=50, ix_end=w1*N)
es = EigenSystem(gm)
print("eigenvalues %2.1f, %2.1f, ... , %2.1f" % (es.evals()[0], es.evals()[1], es.evals()[-1]))
es.plot1((), vals=True, title="solid-%.0f%% %i x %i; star overlay x2 (eigenvalues)" % (p*100,N,w1))
Explanation: Complex topologies
End of explanation |
8,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SQL
Accessing data stored in databases is a routine exercise. I demonstrate a few helpful methods in the Jupyter Notebook.
Step1: SQL
CREATE TABLE presidents (first_name, last_name, year_of_birth);
INSERT INTO presidents VALUES ('George', 'Washington', 1732);
INSERT INTO presidents VALUES ('John', 'Adams', 1735);
INSERT INTO presidents VALUES ('Thomas', 'Jefferson', 1743);
INSERT INTO presidents VALUES ('James', 'Madison', 1751);
INSERT INTO presidents VALUES ('James', 'Monroe', 1758);
INSERT INTO presidents VALUES ('Zachary', 'Taylor', 1784);
INSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809);
INSERT INTO presidents VALUES ('Theodore', 'Roosevelt', 1858);
INSERT INTO presidents VALUES ('Richard', 'Nixon', 1913);
INSERT INTO presidents VALUES ('Barack', 'Obama', 1961);
Step2: Through pandas directly
Step4: SQL
SELECT first_name,
last_name,
year_of_birth
FROM presidents
WHERE year_of_birth > 1800; | Python Code:
!hive create_features.sql
import warnings
warnings.filterwarnings('ignore')
!conda install -c conda-forge ipython-sql -y
%load_ext sql
%config SqlMagic.autopandas=True
import pandas as pd
import sqlite3
Explanation: SQL
Accessing data stored in databases is a routine exercise. I demonstrate a few helpful methods in the Jupyter Notebook.
End of explanation
%%sql sqlite://
CREATE TABLE presidents (first_name, last_name, year_of_birth);
INSERT INTO presidents VALUES ('George', 'Washington', 1732);
INSERT INTO presidents VALUES ('John', 'Adams', 1735);
INSERT INTO presidents VALUES ('Thomas', 'Jefferson', 1743);
INSERT INTO presidents VALUES ('James', 'Madison', 1751);
INSERT INTO presidents VALUES ('James', 'Monroe', 1758);
INSERT INTO presidents VALUES ('Zachary', 'Taylor', 1784);
INSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809);
INSERT INTO presidents VALUES ('Theodore', 'Roosevelt', 1858);
INSERT INTO presidents VALUES ('Richard', 'Nixon', 1913);
INSERT INTO presidents VALUES ('Barack', 'Obama', 1961);
later_presidents = %sql SELECT * FROM presidents WHERE year_of_birth > 1825
later_presidents
type(later_presidents)
con = sqlite3.connect("presidents.sqlite")
later_presidents.to_sql("presidents", con, if_exists='replace')
Explanation: SQL
CREATE TABLE presidents (first_name, last_name, year_of_birth);
INSERT INTO presidents VALUES ('George', 'Washington', 1732);
INSERT INTO presidents VALUES ('John', 'Adams', 1735);
INSERT INTO presidents VALUES ('Thomas', 'Jefferson', 1743);
INSERT INTO presidents VALUES ('James', 'Madison', 1751);
INSERT INTO presidents VALUES ('James', 'Monroe', 1758);
INSERT INTO presidents VALUES ('Zachary', 'Taylor', 1784);
INSERT INTO presidents VALUES ('Abraham', 'Lincoln', 1809);
INSERT INTO presidents VALUES ('Theodore', 'Roosevelt', 1858);
INSERT INTO presidents VALUES ('Richard', 'Nixon', 1913);
INSERT INTO presidents VALUES ('Barack', 'Obama', 1961);
End of explanation
%%sql
SELECT first_name,
last_name,
year_of_birth
FROM presidents
WHERE year_of_birth > 1800;
Explanation: Through pandas directly
End of explanation
con = sqlite3.connect("presidents.sqlite")
cur = con.cursor()
new_dataframe = pd.read_sql("""SELECT first_name, last_name, year_of_birth
FROM presidents
WHERE year_of_birth > 1800""",
con=con)
con.close()
new_dataframe
type(new_dataframe)
Explanation: SQL
SELECT first_name,
last_name,
year_of_birth
FROM presidents
WHERE year_of_birth > 1800;
End of explanation |
8,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 14
Step1: Python also has a module called types, which has the definitions of the basic types of the interpreter.
Example
Step2: Through introspection, it is possible to determine the fields of a database table, for example.
Inspect
The module inspect provides a set of high-level functions that allow for introspection to investigate types, collection items, classes, functions, source code and the runtime stack of the interpreter.
Example
Step3: The functions that work with the stack of the interpreter should be used with caution because it is possible to create cyclic references (a variable that points to the stack item that has the variable itself). The existence of references to stack items slows the destruction of the items by the garbage collector of the interpreter. | Python Code:
# Getting some information
# about global objects in the program
from types import ModuleType
def info(n_obj):
# Create a reference to the object
obj = globals()[n_obj]
# Show object information
print ('Name of object:', n_obj)
print ('Identifier:', id(obj))
print ('Type:', type(obj))
print ('Representation:', repr(obj))
# If it is a module
if isinstance(obj, ModuleType):
print('items:')
for item in dir(obj):
print (item)
print()
# Showing information
for n_obj in dir()[:10]: # The slice [:10] is used just to limit objects
info(n_obj)
Explanation: Chapter 14: Introspection
Introspection or reflection is the ability of software to identify and report its own internal structures, such as types, variable scope, methods and attributes.
Native interpreter functions for introspection:
<table>
<tr>
<th>Function</th>
<th>Returns</th>
</tr>
<tr>
<td><code>type(object)</code></td>
<td>The type (class) of the object</td>
</tr>
<tr>
<td><code>id(object)</code></td>
<td>object identifier</td>
</tr>
<tr>
<td><code>locals()</code></td>
<td>local variables dictionary</td>
</tr>
<tr>
<td><code>globals()</code></td>
<td>global variables dictionary</td>
</tr>
<tr>
<td><code>vars(object)</code></td>
<td>object symbols dictionary</td>
</tr>
<tr>
<td><code>len(object)</code></td>
<td>size of an object</td>
</tr>
<tr>
<td><code>dir(object)</code></td>
<td>A list of object structures</td>
</tr>
<tr>
<td><code>help(object)</code></td>
<td>Object doc strings</td>
</tr>
<tr>
<td><code>repr(object)</code></td>
<td>Object representation</td>
</tr>
<tr>
<td><code>isinstance(object, class)</code></td>
<td>True if object is derived from class</td>
</tr>
<tr>
<td><code>issubclass(subclass, class)</code></td>
<td>True if object inherits the class</td>
</tr>
</table>
The object identifier is a unique number that is used by the interpreter for identifying the objects internally.
Example:
End of explanation
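Several functions from the table above (issubclass, len, vars) are not exercised by the example; a minimal sketch with made-up classes, assuming Python 3:

```python
# Demonstrating a few more functions from the table (hypothetical classes)
class Animal:
    pass

class Dog(Animal):
    kind = 'canine'

print(issubclass(Dog, Animal))  # True: Dog inherits from Animal
print(len('introspection'))     # 13: the size of the string object
print('kind' in vars(Dog))      # True: 'kind' is in Dog's symbol dictionary
```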
import types
s = ''
if isinstance(s, str): # In Python 3, types.StringType no longer exists; str is used directly
print('s is a string.')
Explanation: Python also has a module called types, which has the definitions of the basic types of the interpreter.
Example:
End of explanation
import os.path
# inspect: "friendly" introspection module
import inspect
print('Object:', inspect.getmodule(os.path))
print('Class?', inspect.isclass(str))
# Lists all functions that exist in "os.path"
print('Member:', end=' ')
for name, struct in inspect.getmembers(os.path):
if inspect.isfunction(struct):
print(name, end=' ')
Explanation: Through introspection, it is possible to determine the fields of a database table, for example.
Inspect
The module inspect provides a set of high-level functions that allow for introspection to investigate types, collection items, classes, functions, source code and the runtime stack of the interpreter.
Example:
End of explanation
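inspect can also examine call signatures, one of the "high-level functions" mentioned above; a small sketch (the function scale is a made-up example):

```python
import inspect

def scale(x, factor=2.0):
    return x * factor

# inspect.signature exposes a callable's parameters and their defaults
sig = inspect.signature(scale)
print(sig)                   # (x, factor=2.0)
print(list(sig.parameters))  # ['x', 'factor']
```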
import inspect
def myself():
return inspect.stack()[1][3]
def parent_function():
return inspect.stack()[2][3]
def power(expo):
print("I am at {name}, {parent}".format(name=myself(), parent=parent_function()))
def inner(num):
print("I am at {name}, {parent}".format(name=myself(), parent=parent_function()))
return num**expo
return inner
def test_power(a, b):
p = power(a)
p(b)
d = power(10)
d(10)
test_power(10, 5)
Explanation: The functions that work with the stack of the interpreter should be used with caution because it is possible to create cyclic references (a variable that points to the stack item that has the variable itself). The existence of references to stack items slows the destruction of the items by the garbage collector of the interpreter.
End of explanation |
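One common way to avoid such cyclic references is to delete the local reference to the frame as soon as it is no longer needed; a minimal sketch (this pattern is my illustration, not from the original chapter):

```python
import inspect

def caller_name():
    frame = inspect.currentframe()
    try:
        # One frame up the stack is the caller of this function
        return frame.f_back.f_code.co_name
    finally:
        # Break the local reference to the frame so the garbage
        # collector can reclaim the stack items promptly
        del frame

def outer():
    return caller_name()

print(outer())  # outer
```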
8,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fire up graphlab create
Step1: Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
Step2: Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
Step3: Fit the regression model using crime as the feature
Step4: Let's see what our fit looks like
Matplotlib is a Python plotting library that is also useful for plotting. You can install it with
Step5: Above
Step6: Refit our simple regression model on this modified dataset
Step7: Look at the fit
Step8: Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
Step9: Above
Step10: Do the coefficients change much? | Python Code:
import sys
sys.path.append('C:\Anaconda2\envs\dato-env\Lib\site-packages')
import graphlab
Explanation: Fire up graphlab create
End of explanation
sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')
sales
Explanation: Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
End of explanation
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
Explanation: Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
End of explanation
crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'],validation_set=None,verbose=False)
Explanation: Fit the regression model using crime as the feature
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(sales['CrimeRate'],sales['HousePrice'],'.',
sales['CrimeRate'],crime_model.predict(sales),'-')
Explanation: Let's see what our fit looks like
Matplotlib is a Python plotting library that is also useful for plotting. You can install it with:
'pip install matplotlib'
End of explanation
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
Explanation: Above: blue dots are original data, green line is the fit from the simple regression.
Remove Center City and redo the analysis
Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.
End of explanation
crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
Explanation: Refit our simple regression model on this modified dataset:
End of explanation
plt.plot(sales_noCC['CrimeRate'],sales_noCC['HousePrice'],'.',
sales_noCC['CrimeRate'],crime_model_noCC.predict(sales_noCC),'-')
Explanation: Look at the fit:
End of explanation
crime_model.get('coefficients')
crime_model_noCC.get('coefficients')
Explanation: Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
End of explanation
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
Explanation: Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different!
High leverage points:
Center City is said to be a "high leverage" point because it is at an extreme x value where there are not other observations. As a result, recalling the closed-form solution for simple regression, this point has the potential to dramatically change the least squares line since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. On the other hand, if this point somehow differs, it can be strongly influential in the resulting fit.
Influential observations:
An influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are not leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value).
Remove high-value outlier neighborhoods and redo analysis
Based on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.
End of explanation
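The leverage mentioned above follows from the closed-form solution: for simple regression, h_i = 1/n + (x_i - x_bar)^2 / sum_j (x_j - x_bar)^2. A small illustration with made-up numbers (not part of the original notebook):

```python
def leverages(xs):
    # h_i = 1/n + (x_i - x_bar)^2 / sum_j (x_j - x_bar)^2
    n = len(xs)
    x_bar = sum(xs) / n
    ss = sum((x - x_bar) ** 2 for x in xs)
    return [1.0 / n + (x - x_bar) ** 2 / ss for x in xs]

# A single extreme x value (like Center City's crime rate) dominates the leverage:
hs = leverages([10, 12, 11, 13, 400])
print([round(h, 3) for h in hs])  # the last point carries nearly all the leverage
```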
crime_model_noCC.get('coefficients')
crime_model_nohighend.get('coefficients')
Explanation: Do the coefficients change much?
End of explanation |
8,452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SA360 Report
Move SA360 report to BigQuery.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter SA360 Report Recipe Parameters
Fill in the report definition and destination table.
Wait for BigQuery->->-> to be created.
Or give these instructions to the client.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute SA360 Report
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: SA360 Report
Move SA360 report to BigQuery.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_sa':'service', # Credentials used for writing data.
'auth_bq':'service', # Authorization used for writing data.
'dataset':'', # Existing BigQuery dataset.
'table':'', # Table to create from this report.
'report':{}, # Body part of report request API call.
'is_incremental_load':False, # Clear data in destination table during this report's time period, then append report data to destination table.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter SA360 Report Recipe Parameters
Fill in the report definition and destination table.
Wait for BigQuery->->-> to be created.
Or give these instructions to the client.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'sa':{
'description':'Create a dataset for bigquery tables.',
'auth':{'field':{'name':'auth_sa','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'body':{'field':{'name':'report','kind':'json','order':4,'default':{},'description':'Body part of report request API call.'}},
'out':{
'bigquery':{
'auth':{'field':{'name':'auth_bq','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'dataset':{'field':{'name':'dataset','kind':'string','order':2,'default':'','description':'Existing BigQuery dataset.'}},
'table':{'field':{'name':'table','kind':'string','order':3,'default':'','description':'Table to create from this report.'}},
'is_incremental_load':{'field':{'name':'is_incremental_load','kind':'boolean','order':4,'default':False,'description':"Clear data in destination table during this report's time period, then append report data to destination table."}},
'header':True
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute SA360 Report
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
8,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing train dataset
Divide the train folder into two folders mytrain_ox and myvalid_ox
Step1: Visualize the size of the original train dataset.
Step2: Shuffle and split the train filenames
Step3: Visualize the size of the processed train dataset
Step4: Create symbolic link of images | Python Code:
from sklearn.model_selection import train_test_split
import seaborn as sns
import os
import shutil
import pandas as pd
%matplotlib inline
df = pd.read_csv('list.txt', sep=' ')
df.loc[2000:2005]
Explanation: Preprocessing train dataset
Divide the train folder into two folders mytrain_ox and myvalid_ox
End of explanation
train_cat = df[df['SPECIES'] == 1]
train_dog = df[df['SPECIES'] == 2]
x = ['cat', 'dog']
y = [len(train_cat), len(train_dog)]
ax = sns.barplot(x=x, y=y)
Explanation: Visualize the size of the original train dataset.
End of explanation
mytrain, myvalid = train_test_split(df, test_size=0.1)
print len(mytrain), len(myvalid)
Explanation: Shuffle and split the train filenames
End of explanation
mytrain_cat = mytrain[mytrain['SPECIES'] == 1]
mytrain_dog = mytrain[mytrain['SPECIES'] == 2]
myvalid_cat = myvalid[myvalid['SPECIES'] == 1]
myvalid_dog = myvalid[myvalid['SPECIES'] == 2]
x = ['mytrain_cat', 'mytrain_dog', 'myvalid_cat', 'myvalid_dog']
y = [len(mytrain_cat), len(mytrain_dog), len(myvalid_cat), len(myvalid_dog)]
ax = sns.barplot(x=x, y=y)
Explanation: Visualize the size of the processed train dataset
End of explanation
def remove_and_create_class(dirname):
if os.path.exists(dirname):
shutil.rmtree(dirname)
os.mkdir(dirname)
os.mkdir(dirname+'/cat')
os.mkdir(dirname+'/dog')
remove_and_create_class('mytrain_ox')
remove_and_create_class('myvalid_ox')
for filename in mytrain_cat['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'mytrain_ox/cat/'+filename+'.jpg')
for filename in mytrain_dog['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'mytrain_ox/dog/'+filename+'.jpg')
for filename in myvalid_cat['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'myvalid_ox/cat/'+filename+'.jpg')
for filename in myvalid_dog['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'myvalid_ox/dog/'+filename+'.jpg')
Explanation: Create symbolic link of images
End of explanation |
8,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GTEx MatrixTables
To create MatrixTables containing all variant-gene associations tested in each tissue (including non-significant associations) for GTEx v8.
There are two MatrixTables, one is for the eQTL tissue-specific all SNP gene associations data and the other is for the sQTL tissue-specific all SNP gene associations data.
Hail Tables for each tissue were already created previously from the data here. For eQTL each table is ~7 GiB, and for sQTL each table is ~40 GiB or so. A README describing the fields in the GTEx QTL datasets is available here.
Each MatrixTable has rows keyed by ["locus", "alleles"], and columns keyed by ["tissue"].
Step1: First we can grab a list of the GTEx tissue names
Step2: Take a peek at the tissue names we get to make sure they're what we expect
Step3: We can start with the process for the eQTL tables since they are smaller and a bit easier to work with. There are pretty much three steps here
- Generate individual MatrixTables from the existing Hail Tables for each tissue type, there are 49 tissue types in total.
- Perform a multi-way union cols (MWUC) on these 49 MatrixTables to create a single MatrixTable where there is a column for each tissue.
- After the MWUC the resulting MatrixTable has pretty imbalanced partitions (some are KiBs, others are GiBs) so we have to repartition the unioned MatrixTable.
eQTL tissue-specific all SNP gene associations
Generate individual MatrixTables from the existing Hail Tables for each tissue type (49 total).
Write output to gs
Step6: To ensure that everything is joined correctly later on, we add both the _gene_id and tss_distance fields to the table keys here.
After the unioned MatrixTable is created we will re-key the rows to just be ["locus", "alleles"], and rename the fields above back to gene_id and tss_distance (they will now be row fields).
Perform multi-way union cols (MWUC) on MatrixTables generated above
The function below was used to take a list of MatrixTables and a list with the column key fields and output a single MatrixTable with the columns unioned.
Step7: Now we can read in each individual MatrixTable and add it to the list we will pass to multi_way_union_cols.
Step8: Repartition unioned MatrixTable
After the MWUC the resulting MatrixTable has pretty imbalanced partitions (some are KiBs, others are GiBs) so we want to repartition the unioned MatrixTable.
First we can re-key the rows of our MatrixTable
Step9: I tried reading in the MatrixTable with _n_partitions=1000 to see how our partitions would look, but we still had a few that were much larger than the rest. So after this I ended up doing using repartition with a full shuffle, and it balanced things out.
Step10: And now we have a single MatrixTable for the GTEx eQTL data. | Python Code:
import subprocess
import hail as hl
hl.init()
Explanation: GTEx MatrixTables
To create MatrixTables containing all variant-gene associations tested in each tissue (including non-significant associations) for GTEx v8.
There are two MatrixTables, one is for the eQTL tissue-specific all SNP gene associations data and the other is for the sQTL tissue-specific all SNP gene associations data.
Hail Tables for each tissue were already created previously from the data here. For eQTL each table is ~7 GiB, and for sQTL each table is ~40 GiB or so. A README describing the fields in the GTEx QTL datasets is available here.
Each MatrixTable has rows keyed by ["locus", "alleles"], and columns keyed by ["tissue"].
End of explanation
list_tissues = subprocess.run(["gsutil", "-u", "broad-ctsa", "ls",
"gs://hail-datasets-tmp/GTEx/GTEx_Analysis_v8_QTLs/GTEx_Analysis_v8_eQTL_all_associations"],
stdout=subprocess.PIPE)
tissue_files = list_tissues.stdout.decode("utf-8").split()
tissue_names = [x.split("/")[-1].split(".")[0] for x in tissue_files]
Explanation: First we can grab a list of the GTEx tissue names:
End of explanation
tissue_names[0:5]
Explanation: Take a peek at the tissue names we get to make sure they're what we expect:
End of explanation
for tissue_name in tissue_names:
print(f"eQTL: {tissue_name}")
ht = hl.read_table(f"gs://hail-datasets-us/GTEx_eQTL_allpairs_{tissue_name}_v8_GRCh38.ht", _n_partitions=64)
ht = ht.annotate(_gene_id = ht.gene_id, _tss_distance = ht.tss_distance)
ht = ht.drop("variant_id", "metadata")
ht = ht.key_by("locus", "alleles", "_gene_id", "_tss_distance")
ht = ht.annotate(**{tissue_name: ht.row_value.drop("gene_id", "tss_distance")})
ht = ht.select(tissue_name)
mt = ht.to_matrix_table_row_major(columns=[tissue_name], col_field_name="tissue")
mt = mt.checkpoint(
f"gs://hail-datasets-tmp/GTEx/eQTL_MatrixTables/GTEx_eQTL_all_snp_gene_associations_{tissue_name}_v8_GRCh38.mt",
overwrite=False,
_read_if_exists=True
)
Explanation: We can start with the process for the eQTL tables since they are smaller and a bit easier to work with. There are pretty much three steps here
- Generate individual MatrixTables from the existing Hail Tables for each tissue type, there are 49 tissue types in total.
- Perform a multi-way union cols (MWUC) on these 49 MatrixTables to create a single MatrixTable where there is a column for each tissue.
- After the MWUC the resulting MatrixTable has pretty imbalanced partitions (some are KiBs, others are GiBs) so we have to repartition the unioned MatrixTable.
eQTL tissue-specific all SNP gene associations
Generate individual MatrixTables from the existing Hail Tables for each tissue type (49 total).
Write output to gs://hail-datasets-tmp/GTEx/eQTL_MatrixTables/.
End of explanation
from typing import List
def multi_way_union_cols(mts: List[hl.MatrixTable], column_keys: List[str]) -> hl.MatrixTable:
missing_struct = "struct{ma_samples: int32, ma_count: int32, maf: float64, pval_nominal: float64, slope: float64, slope_se: float64}"
mts = [mt._localize_entries("_mt_entries", "_mt_cols") for mt in mts]
joined = hl.Table.multi_way_zip_join(mts, "_t_entries", "_t_cols")
joined = joined.annotate(_t_entries_missing = joined._t_entries.map(lambda x: hl.is_missing(x)))
rows = [(r, joined._t_entries.map(lambda x: x[r])[0])
for r in joined._t_entries.dtype.element_type.fields
if r != "_mt_entries"]
# Need to provide a dummy array<struct> for when tissues are not present, to make sure
# missing elements are not dropped from the flattened array.
# Otherwise we will get a HailException: length mismatch between entry array and column array in
# 'to_matrix_table_row_major'.
entries = [("_t_entries_flatten",
hl.flatten(
joined._t_entries.map(
lambda x: hl.if_else(
hl.is_defined(x),
x._mt_entries,
hl.array([
hl.struct(
ma_samples = hl.missing(hl.tint32),
ma_count = hl.missing(hl.tint32),
maf = hl.missing(hl.tfloat64),
pval_nominal = hl.missing(hl.tfloat64),
slope = hl.missing(hl.tfloat64),
slope_se = hl.missing(hl.tfloat64)
)
])
)
)
)
)]
joined = joined.annotate(**dict(rows + entries))
# Also want to make sure that if an entry is missing, it is replaced with a missing struct
# of the same form at the same index in the array.
joined = joined.annotate(_t_entries_new = hl.zip(joined._t_entries_missing,
joined._t_entries_flatten,
fill_missing=False))
joined = joined.annotate(
_t_entries_new = joined._t_entries_new.map(
lambda x: hl.if_else(x[0] == True, hl.missing(missing_struct), x[1])
)
)
joined = joined.annotate_globals(_t_cols = hl.flatten(joined._t_cols.map(lambda x: x._mt_cols)))
joined = joined.drop("_t_entries", "_t_entries_missing", "_t_entries_flatten")
mt = joined._unlocalize_entries("_t_entries_new", "_t_cols", ["tissue"])
return mt
Explanation: To ensure that everything is joined correctly later on, we add both the _gene_id and tss_distance fields to the table keys here.
After the unioned MatrixTable is created we will re-key the rows to just be ["locus", "alleles"], and rename the fields above back to gene_id and tss_distance (they will now be row fields).
Perform multi-way union cols (MWUC) on MatrixTables generated above
The function below was used to take a list of MatrixTables and a list with the column key fields and output a single MatrixTable with the columns unioned.
End of explanation
# Get list of file paths for individual eQTL MatrixTables
list_eqtl_mts = subprocess.run(["gsutil", "-u", "broad-ctsa", "ls", "gs://hail-datasets-tmp/GTEx/eQTL_MatrixTables"],
stdout=subprocess.PIPE)
eqtl_mts = list_eqtl_mts.stdout.decode("utf-8").split()
# Load MatrixTables for each tissue type to store in list for MWUC
mts_list = []
for eqtl_mt in eqtl_mts:
tissue_name = eqtl_mt.replace("gs://hail-datasets-tmp/GTEx/eQTL_MatrixTables/GTEx_eQTL_all_snp_gene_associations_", "")
tissue_name = tissue_name.replace("_v8_GRCh38.mt/", "")
print(tissue_name)
mt = hl.read_matrix_table(eqtl_mt)
mts_list.append(mt)
full_mt = multi_way_union_cols(mts_list, ["tissue"])
full_mt = full_mt.checkpoint("gs://hail-datasets-tmp/GTEx/checkpoints/GTEx_eQTL_all_snp_gene_associations_cols_unioned.mt",
overwrite=False,
_read_if_exists=True)
Explanation: Now we can read in each individual MatrixTable and add it to the list we will pass to multi_way_union_cols.
End of explanation
# Re-key rows and repartition
full_mt = hl.read_matrix_table("gs://hail-datasets-tmp/GTEx/checkpoints/GTEx_eQTL_all_snp_gene_associations_cols_unioned.mt",
_n_partitions=1000)
full_mt = full_mt.key_rows_by("locus", "alleles")
full_mt = full_mt.checkpoint("gs://hail-datasets-tmp/GTEx/GTEx_eQTL_all_snp_gene_associations.mt",
overwrite=False,
_read_if_exists=True)
full_mt.describe()
Explanation: Repartition unioned MatrixTable
After the MWUC the resulting MatrixTable has pretty imbalanced partitions (some are KiBs, others are GiBs) so we want to repartition the unioned MatrixTable.
First we can re-key the rows of our MatrixTable:
End of explanation
# Add metadata to globals and write final MatrixTable to hail-datasets-us
full_mt = hl.read_matrix_table("gs://hail-datasets-tmp/GTEx/GTEx_eQTL_all_snp_gene_associations.mt")
full_mt = full_mt.repartition(1000, shuffle=True)
n_rows, n_cols = full_mt.count()
n_partitions = full_mt.n_partitions()
full_mt = full_mt.rename({"_gene_id": "gene_id", "_tss_distance": "tss_distance"})
full_mt = full_mt.annotate_globals(
metadata = hl.struct(name = "GTEx_eQTL_all_snp_gene_associations",
reference_genome = "GRCh38",
n_rows = n_rows,
n_cols = n_cols,
n_partitions = n_partitions)
)
# Final eQTL MatrixTable is ~224 GiB w/ 1000 partitions
full_mt.write("gs://hail-datasets-us/GTEx_eQTL_all_snp_gene_associations_v8_GRCh38.mt")
Explanation: I tried reading in the MatrixTable with _n_partitions=1000 to see how our partitions would look, but we still had a few that were much larger than the rest. So after this I ended up using repartition with a full shuffle, and it balanced things out.
End of explanation
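The effect of a full-shuffle repartition on partition sizes can be pictured with a small plain-Python sketch (illustrative only, not Hail's implementation): every row is redistributed so that all partitions end up within one row of the same size.

```python
# Evenly redistribute rows across n partitions -- what a shuffle repartition
# achieves for partition *sizes* (Hail additionally keeps rows sorted by key).
def rebalance(rows, n):
    size, rem = divmod(len(rows), n)
    parts, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < rem else 0)
        parts.append(rows[start:end])
        start = end
    return parts

skewed_sizes = [2, 3, 95]            # imbalanced partition sizes, in rows
rows = list(range(sum(skewed_sizes)))
print([len(p) for p in rebalance(rows, 4)])  # [25, 25, 25, 25]
```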
hl.read_matrix_table("gs://hail-datasets-us/GTEx_eQTL_all_snp_gene_associations_v8_GRCh38.mt").describe()
Explanation: And now we have a single MatrixTable for the GTEx eQTL data.
End of explanation |
8,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Instant Recognition with Caffe
In this example we'll classify an image with the bundled CaffeNet model based on the network architecture of Krizhevsky et al. for ImageNet. We'll compare CPU and GPU operation then reach into the model to inspect features and the output.
(These feature visualizations follow the DeCAF visualizations originally by Yangqing Jia.)
First, import required modules, set plotting parameters, and run ./scripts/download_model_binary.py models/bvlc_reference_caffenet to get the pretrained CaffeNet model if it hasn't already been fetched.
Step1: Set Caffe to CPU mode, load the net in the test phase for inference, and configure input preprocessing.
Step2: Let's start with a simple classification. We'll set a batch of 50 to demonstrate batch processing, even though we'll only be classifying one image. (Note that the batch size can also be changed on-the-fly.)
Step3: Feed in the image (with some preprocessing) and classify with a forward pass.
Step4: What did the input look like?
Step5: Adorable, but was our classification correct?
Step6: Indeed! But how long did it take?
Step7: That's a while, even for a batch size of 50 images. Let's switch to GPU mode.
Step8: Much better. Now let's look at the net in more detail.
First, the layer features and their shapes (1 is the batch size, corresponding to the single input image in this example).
Step9: The parameters and their shapes. The parameters are net.params['name'][0] while biases are net.params['name'][1].
Step10: Helper functions for visualization
Step11: The input image
The first layer filters, conv1
Step12: The first layer output, conv1 (rectified responses of the filters above, first 36 only)
Step13: The second layer filters, conv2
There are 256 filters, each of which has dimension 5 x 5 x 48. We show only the first 48 filters, with each channel shown separately, so that each filter is a row.
Step14: The second layer output, conv2 (rectified, only the first 36 of 256 channels)
Step15: The third layer output, conv3 (rectified, all 384 channels)
Step16: The fourth layer output, conv4 (rectified, all 384 channels)
Step17: The fifth layer output, conv5 (rectified, all 256 channels)
Step18: The fifth layer after pooling, pool5
Step19: The first fully connected layer, fc6 (rectified)
We show the output values and the histogram of the positive values
Step20: The second fully connected layer, fc7 (rectified)
Step21: The final probability output, prob
Step22: Let's see the top 5 predicted labels. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make sure that caffe is on the python path:
caffe_root = '../' # this file is expected to be in {caffe_root}/examples
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
import os
if not os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'):
print("Downloading pre-trained CaffeNet model...")
!../scripts/download_model_binary.py ../models/bvlc_reference_caffenet
Explanation: Instant Recognition with Caffe
In this example we'll classify an image with the bundled CaffeNet model based on the network architecture of Krizhevsky et al. for ImageNet. We'll compare CPU and GPU operation then reach into the model to inspect features and the output.
(These feature visualizations follow the DeCAF visualizations originally by Yangqing Jia.)
First, import required modules, set plotting parameters, and run ./scripts/download_model_binary.py models/bvlc_reference_caffenet to get the pretrained CaffeNet model if it hasn't already been fetched.
End of explanation
caffe.set_mode_cpu()
net = caffe.Net(caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt',
caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
caffe.TEST)
# input preprocessing: 'data' is the name of the input blob == net.inputs[0]
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2,0,1))
transformer.set_mean('data', np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1)) # mean pixel
transformer.set_raw_scale('data', 255) # the reference model operates on images in [0,255] range instead of [0,1]
transformer.set_channel_swap('data', (2,1,0)) # the reference model has channels in BGR order instead of RGB
Explanation: Set Caffe to CPU mode, load the net in the test phase for inference, and configure input preprocessing.
End of explanation
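The Transformer steps can be illustrated on a single pixel in plain Python (no Caffe or numpy needed). The mean values below are placeholders for illustration, not the actual ILSVRC mean:

```python
# Per-pixel version of the preprocessing chain: scale [0, 1] -> [0, 255],
# swap RGB -> BGR, then subtract the mean pixel. The channels-first transpose
# is omitted since we only look at one pixel.
MEAN_BGR = (104.0, 117.0, 123.0)  # placeholder mean pixel, BGR order

def preprocess_pixel(rgb, mean_bgr=MEAN_BGR):
    r, g, b = (255.0 * c for c in rgb)           # set_raw_scale('data', 255)
    swapped = (b, g, r)                           # set_channel_swap('data', (2, 1, 0))
    return tuple(c - m for c, m in zip(swapped, mean_bgr))

print(preprocess_pixel((1.0, 0.5, 0.0)))  # (-104.0, 10.5, 132.0)
```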
# set net to batch size of 50
net.blobs['data'].reshape(50,3,227,227)
Explanation: Let's start with a simple classification. We'll set a batch of 50 to demonstrate batch processing, even though we'll only be classifying one image. (Note that the batch size can also be changed on-the-fly.)
End of explanation
net.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image(caffe_root + 'examples/images/elephants.jpg'))
out = net.forward()
print("Predicted class is #{}.".format(out['prob'][0].argmax()))
Explanation: Feed in the image (with some preprocessing) and classify with a forward pass.
End of explanation
plt.imshow(transformer.deprocess('data', net.blobs['data'].data[0]))
Explanation: What did the input look like?
End of explanation
# load labels
imagenet_labels_filename = caffe_root + 'data/ilsvrc12/synset_words.txt'
try:
labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
except:
!../data/ilsvrc12/get_ilsvrc_aux.sh
labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
# sort top k predictions from softmax output
top_k = net.blobs['prob'].data[0].flatten().argsort()[-1:-6:-1]
print(labels[top_k])
Explanation: Adorable, but was our classification correct?
End of explanation
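The `argsort()[-1:-6:-1]` idiom above picks the indices of the five largest probabilities, highest first. The same thing in plain Python (tie order may differ from numpy's):

```python
def top_k_indices(probs, k=5):
    # Sort indices by probability, highest first, and keep the first k.
    return sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]

probs = [0.05, 0.30, 0.02, 0.45, 0.10, 0.08]
print(top_k_indices(probs, 3))  # [3, 1, 4]
```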
# CPU mode
net.forward() # call once for allocation
%timeit net.forward()
Explanation: Indeed! But how long did it take?
End of explanation
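Outside IPython the %timeit magic is unavailable; the stdlib timeit module gives the same kind of measurement. A sketch with a stand-in function (net.forward itself is not called here):

```python
import timeit

def forward_stub():  # stand-in for net.forward in this sketch
    return sum(i * i for i in range(1000))

per_call = timeit.timeit(forward_stub, number=100) / 100
print("%.6f s per call" % per_call)
```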
# GPU mode
caffe.set_device(0)
caffe.set_mode_gpu()
net.forward() # call once for allocation
%timeit net.forward()
Explanation: That's a while, even for a batch size of 50 images. Let's switch to GPU mode.
End of explanation
[(k, v.data.shape) for k, v in net.blobs.items()]
Explanation: Much better. Now let's look at the net in more detail.
First, the layer features and their shapes (1 is the batch size, corresponding to the single input image in this example).
End of explanation
[(k, v[0].data.shape) for k, v in net.params.items()]
Explanation: The parameters and their shapes. The parameters are net.params['name'][0] while biases are net.params['name'][1].
End of explanation
# take an array of shape (n, height, width) or (n, height, width, channels)
# and visualize each (height, width) thing in a grid of size approx. sqrt(n) by sqrt(n)
def vis_square(data, padsize=1, padval=0):
data -= data.min()
data /= data.max()
# force the number of filters to be square
n = int(np.ceil(np.sqrt(data.shape[0])))
padding = ((0, n ** 2 - data.shape[0]), (0, padsize), (0, padsize)) + ((0, 0),) * (data.ndim - 3)
data = np.pad(data, padding, mode='constant', constant_values=(padval, padval))
# tile the filters into an image
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
plt.imshow(data)
Explanation: Helper functions for visualization
End of explanation
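The tiling arithmetic inside vis_square is worth spelling out: the grid side is the smallest n with n*n >= the filter count, and the remaining n*n - count tiles are filled with the padding value. A quick plain-Python check:

```python
import math

def grid_shape(count):
    # Smallest square grid that fits `count` tiles, plus the blank tiles added.
    n = int(math.ceil(math.sqrt(count)))
    return n, n * n - count

print(grid_shape(96))  # conv1: 96 filters -> 10x10 grid with 4 blank tiles
print(grid_shape(36))  # 36 feature maps -> exact 6x6 grid
```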
# the parameters are a list of [weights, biases]
filters = net.params['conv1'][0].data
vis_square(filters.transpose(0, 2, 3, 1))
Explanation: The input image
The first layer filters, conv1
End of explanation
feat = net.blobs['conv1'].data[0, :36]
vis_square(feat, padval=1)
Explanation: The first layer output, conv1 (rectified responses of the filters above, first 36 only)
End of explanation
filters = net.params['conv2'][0].data
vis_square(filters[:48].reshape(48**2, 5, 5))
Explanation: The second layer filters, conv2
There are 256 filters, each of which has dimension 5 x 5 x 48. We show only the first 48 filters, with each channel shown separately, so that each filter is a row.
End of explanation
feat = net.blobs['conv2'].data[0, :36]
vis_square(feat, padval=1)
Explanation: The second layer output, conv2 (rectified, only the first 36 of 256 channels)
End of explanation
feat = net.blobs['conv3'].data[0]
vis_square(feat, padval=0.5)
Explanation: The third layer output, conv3 (rectified, all 384 channels)
End of explanation
feat = net.blobs['conv4'].data[0]
vis_square(feat, padval=0.5)
Explanation: The fourth layer output, conv4 (rectified, all 384 channels)
End of explanation
feat = net.blobs['conv5'].data[0]
vis_square(feat, padval=0.5)
Explanation: The fifth layer output, conv5 (rectified, all 256 channels)
End of explanation
feat = net.blobs['pool5'].data[0]
vis_square(feat, padval=1)
Explanation: The fifth layer after pooling, pool5
End of explanation
feat = net.blobs['fc6'].data[0]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)
Explanation: The first fully connected layer, fc6 (rectified)
We show the output values and the histogram of the positive values
End of explanation
feat = net.blobs['fc7'].data[0]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)
Explanation: The second fully connected layer, fc7 (rectified)
End of explanation
feat = net.blobs['prob'].data[0]
plt.plot(feat.flat)
Explanation: The final probability output, prob
End of explanation
# load labels
imagenet_labels_filename = caffe_root + 'data/ilsvrc12/synset_words.txt'
try:
labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
except:
!../data/ilsvrc12/get_ilsvrc_aux.sh
labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
# sort top k predictions from softmax output
top_k = net.blobs['prob'].data[0].flatten().argsort()[-1:-6:-1]
print(labels[top_k])
Explanation: Let's see the top 5 predicted labels.
End of explanation |
8,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ChainerRL Quickstart Guide
This is a quickstart guide for users who just want to try ChainerRL for the first time.
If you have not yet installed ChainerRL, run the command below to install it
Step1: ChainerRL can be used for any problems if they are modeled as "environments". OpenAI Gym provides various kinds of benchmark environments and defines the common interface among them. ChainerRL uses a subset of the interface. Specifically, an environment must define its observation space and action space and have at least two methods
Step3: Now you have defined your environment. Next, you need to define an agent, which will learn through interactions with the environment.
ChainerRL provides various agents, each of which implements a deep reinforcement learning algorithm.
To use DQN (Deep Q-Network), you need to define a Q-function that receives an observation and returns an expected future return for each action the agent can take. In ChainerRL, you can define your Q-function as chainer.Link as below. Note that the outputs are wrapped by chainerrl.action_value.DiscreteActionValue, which implements chainerrl.action_value.ActionValue. By wrapping the outputs of Q-functions, ChainerRL can treat discrete-action Q-functions like this and NAFs (Normalized Advantage Functions) in the same way.
Step4: If you want to use CUDA for computation, as usual in Chainer, call to_gpu.
Step5: You can also use ChainerRL's predefined Q-functions.
Step6: As in Chainer, chainer.Optimizer is used to update models.
Step7: A Q-function and its optimizer are used by a DQN agent. To create a DQN agent, you need to specify a bit more parameters and configurations.
Step8: Now you have an agent and an environment. It's time to start reinforcement learning!
In training, use agent.act_and_train to select exploratory actions. agent.stop_episode_and_train must be called after finishing an episode. You can get training statistics of the agent via agent.get_statistics.
Step9: Now you have finished training the agent. How good is the agent now? You can test it by using agent.act and agent.stop_episode instead. Exploration such as epsilon-greedy is not used anymore.
Step10: If test scores are good enough, the only remaining task is to save the agent so that you can reuse it. What you need to do is to simply call agent.save to save the agent, then agent.load to load the saved agent.
Step11: RL completed!
But writing code like this every time you use RL might be boring. So, ChainerRL has utility functions that do these things. | Python Code:
import chainer
import chainer.functions as F
import chainer.links as L
import chainerrl
import gym
import numpy as np
Explanation: ChainerRL Quickstart Guide
This is a quickstart guide for users who just want to try ChainerRL for the first time.
If you have not yet installed ChainerRL, run the command below to install it:
pip install chainerrl
If you have already installed ChainerRL, let's begin!
First, you need to import necessary modules. The module name of ChainerRL is chainerrl. Let's import gym and numpy as well since they are used later.
End of explanation
env = gym.make('CartPole-v0')
print('observation space:', env.observation_space)
print('action space:', env.action_space)
obs = env.reset()
env.render()
print('initial observation:', obs)
action = env.action_space.sample()
obs, r, done, info = env.step(action)
print('next observation:', obs)
print('reward:', r)
print('done:', done)
print('info:', info)
Explanation: ChainerRL can be used for any problems if they are modeled as "environments". OpenAI Gym provides various kinds of benchmark environments and defines the common interface among them. ChainerRL uses a subset of the interface. Specifically, an environment must define its observation space and action space and have at least two methods: reset and step.
env.reset will reset the environment to the initial state and return the initial observation.
env.step will execute a given action, move to the next state and return four values:
a next observation
a scalar reward
a boolean value indicating whether the current state is terminal or not
additional information
env.render will render the current state.
Let's try 'CartPole-v0', which is a classic control problem. You can see below that its observation space consists of four real numbers while its action space consists of two discrete actions.
End of explanation
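The two-method contract (reset and step) is small enough to sketch without Gym at all. A hypothetical toy environment with the same interface:

```python
# Toy environment implementing the interface ChainerRL expects:
# reset() -> observation; step(action) -> (observation, reward, done, info).
class CountdownEnv:
    def __init__(self, horizon=10):
        self.horizon = horizon

    def reset(self):
        self.t = 0
        return 0.0

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= self.horizon
        return float(self.t), reward, done, {}

env = CountdownEnv()
obs = env.reset()
obs, r, done, info = env.step(1)
print(obs, r, done, info)  # 1.0 1.0 False {}
```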
class QFunction(chainer.Chain):
def __init__(self, obs_size, n_actions, n_hidden_channels=50):
super().__init__()
with self.init_scope():
self.l0 = L.Linear(obs_size, n_hidden_channels)
self.l1 = L.Linear(n_hidden_channels, n_hidden_channels)
self.l2 = L.Linear(n_hidden_channels, n_actions)
def __call__(self, x, test=False):
"""
Args:
    x (ndarray or chainer.Variable): An observation
    test (bool): a flag indicating whether it is in test mode
"""
h = F.tanh(self.l0(x))
h = F.tanh(self.l1(h))
return chainerrl.action_value.DiscreteActionValue(self.l2(h))
obs_size = env.observation_space.shape[0]
n_actions = env.action_space.n
q_func = QFunction(obs_size, n_actions)
Explanation: Now you have defined your environment. Next, you need to define an agent, which will learn through interactions with the environment.
ChainerRL provides various agents, each of which implements a deep reinforcement learning algorithm.
To use DQN (Deep Q-Network), you need to define a Q-function that receives an observation and returns an expected future return for each action the agent can take. In ChainerRL, you can define your Q-function as chainer.Link as below. Note that the outputs are wrapped by chainerrl.action_value.DiscreteActionValue, which implements chainerrl.action_value.ActionValue. By wrapping the outputs of Q-functions, ChainerRL can treat discrete-action Q-functions like this and NAFs (Normalized Advantage Functions) in the same way.
End of explanation
# Uncomment to use CUDA
# q_func.to_gpu(0)
Explanation: If you want to use CUDA for computation, as usual in Chainer, call to_gpu.
End of explanation
_q_func = chainerrl.q_functions.FCStateQFunctionWithDiscreteAction(
obs_size, n_actions,
n_hidden_layers=2, n_hidden_channels=50)
Explanation: You can also use ChainerRL's predefined Q-functions.
End of explanation
# Use Adam to optimize q_func. eps=1e-2 is for stability.
optimizer = chainer.optimizers.Adam(eps=1e-2)
optimizer.setup(q_func)
Explanation: As in Chainer, chainer.Optimizer is used to update models.
End of explanation
# Set the discount factor that discounts future rewards.
gamma = 0.95
# Use epsilon-greedy for exploration
explorer = chainerrl.explorers.ConstantEpsilonGreedy(
epsilon=0.3, random_action_func=env.action_space.sample)
# DQN uses Experience Replay.
# Specify a replay buffer and its capacity.
replay_buffer = chainerrl.replay_buffer.ReplayBuffer(capacity=10 ** 6)
# Since observations from CartPole-v0 is numpy.float64 while
# Chainer only accepts numpy.float32 by default, specify
# a converter as a feature extractor function phi.
phi = lambda x: x.astype(np.float32, copy=False)
# Now create an agent that will interact with the environment.
agent = chainerrl.agents.DoubleDQN(
q_func, optimizer, replay_buffer, gamma, explorer,
replay_start_size=500, update_interval=1,
target_update_interval=100, phi=phi)
Explanation: A Q-function and its optimizer are used by a DQN agent. To create a DQN agent, you need to specify a bit more parameters and configurations.
End of explanation
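One of the ingredients configured above is easy to state in plain Python: epsilon-greedy exploration picks a random action with probability epsilon and the argmax of the Q-values otherwise (illustrative sketch, not the chainerrl implementation):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                     # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

print(epsilon_greedy([0.1, 0.9], epsilon=0.0))  # epsilon=0 -> always greedy -> 1
```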
n_episodes = 200
max_episode_len = 200
for i in range(1, n_episodes + 1):
obs = env.reset()
reward = 0
done = False
R = 0 # return (sum of rewards)
t = 0 # time step
while not done and t < max_episode_len:
# Uncomment to watch the behaviour
# env.render()
action = agent.act_and_train(obs, reward)
obs, reward, done, _ = env.step(action)
R += reward
t += 1
if i % 10 == 0:
print('episode:', i,
'R:', R,
'statistics:', agent.get_statistics())
agent.stop_episode_and_train(obs, reward, done)
print('Finished.')
Explanation: Now you have an agent and an environment. It's time to start reinforcement learning!
In training, use agent.act_and_train to select exploratory actions. agent.stop_episode_and_train must be called after finishing an episode. You can get training statistics of the agent via agent.get_statistics.
End of explanation
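The loop above tracks the plain sum of rewards R; the quantity the agent itself optimizes is the gamma-discounted return from the DQN configuration. A small sketch of that quantity:

```python
# Discounted return: G = r_0 + gamma * r_1 + gamma^2 * r_2 + ...
def discounted_return(rewards, gamma=0.95):
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```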
for i in range(10):
obs = env.reset()
done = False
R = 0
t = 0
while not done and t < 200:
env.render()
action = agent.act(obs)
obs, r, done, _ = env.step(action)
R += r
t += 1
print('test episode:', i, 'R:', R)
agent.stop_episode()
Explanation: Now you have finished training the agent. How good is the agent now? You can test it by using agent.act and agent.stop_episode instead. Exploration such as epsilon-greedy is not used anymore.
End of explanation
# Save an agent to the 'agent' directory
agent.save('agent')
# Uncomment to load an agent from the 'agent' directory
# agent.load('agent')
Explanation: If test scores are good enough, the only remaining task is to save the agent so that you can reuse it. What you need to do is to simply call agent.save to save the agent, then agent.load to load the saved agent.
End of explanation
# Set up the logger to print info messages for understandability.
import logging
import sys
logging.basicConfig(level=logging.INFO, stream=sys.stdout, format='')
chainerrl.experiments.train_agent_with_evaluation(
agent, env,
steps=2000, # Train the agent for 2000 steps
eval_n_steps=None, # We evaluate for episodes, not time
eval_n_episodes=10, # 10 episodes are sampled for each evaluation
train_max_episode_len=200, # Maximum length of each episode
eval_interval=1000, # Evaluate the agent after every 1000 steps
outdir='result') # Save everything to 'result' directory
Explanation: RL completed!
But writing code like this every time you use RL might be boring. So, ChainerRL has utility functions that do these things.
End of explanation |
8,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Masking and padding with Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Introduction
Masking is a way to tell sequence-processing layers that certain timesteps in an input are missing, and thus should be skipped when processing the data.
Padding is a special form of masking where the masked steps are at the start or the end of a sequence. Padding comes from the need to encode sequence data into contiguous batches: to make all sequences in a batch fit a given standard length, it is necessary to pad or truncate some sequences.
Let's take a close look.
Padding sequence data
When processing sequence data, it is very common for individual samples to have different lengths. Consider the following example (text tokenized as words):
[
["Hello", "world", "!"],
["How", "are", "you", "doing", "today"],
["The", "weather", "will", "be", "nice", "tomorrow"],
]
After vocabulary lookup, the data might be vectorized as integers as follows:
[
[71, 1331, 4231]
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
The data is a nested list where individual samples have lengths 3, 5, and 6, respectively. Since the input data for a deep learning model must be a single tensor (of shape e.g. (batch_size, 6, vocab_size) in this case), samples that are shorter than the longest item need to be padded with some placeholder value (alternatively, one might also truncate long samples before padding short samples).
Keras provides a utility function to truncate and pad Python lists to a common length
Step3: Masking
Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking.
There are three ways to introduce input masks in Keras models:
Add a keras.layers.Masking layer.
Configure a keras.layers.Embedding layer with mask_zero=True.
Pass a mask argument manually when calling layers that support this argument (e.g.
Step4: As you can see from the printed result, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing.
Mask propagation in the Functional API and Sequential API
When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using it (e.g.
Step5: This is also the case for the following Functional API model.
Step6: Passing mask tensors directly to layers
Layers that can handle masks (e.g.
Step7: Supporting masking in your custom layers
Sometimes, you may need to write layers that generate a mask (e.g.
Step8: Here is another example of a CustomEmbedding layer that is capable of generating a mask from input values.
Step9: Opting-in to mask propagation on compatible layers
Most layers don't modify the time dimension, so they don't need to modify the current mask. However, they may still want to be able to propagate the current mask, unchanged, to the next layer. This is an opt-in behavior. By default, a custom layer will destroy the current mask (since the framework has no way to tell whether propagating the mask is safe to do).
If you have a custom layer that does not modify the time dimension, and if you want it to be able to propagate the current input mask, you should set self.supports_masking = True in the layer constructor. In this case, the default behavior of compute_mask() is to just pass the current mask through.
Here's an example of a layer that is opted-in for mask propagation.
Step10: You can now use this custom layer in between a mask-generating layer (e.g.
Step11: Writing layers that need mask information
Some layers are mask consumers. They accept a mask argument in call and use it to determine whether to skip certain timesteps.
To write such a layer, you can simply add a mask=None argument in your call signature. The mask associated with the inputs will be passed to your layer whenever it is available.
Here's a simple example: a layer that discards masked timesteps and computes a softmax over the time dimension (axis 1) of the input sequence. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Masking and padding with Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/keras/masking_and_padding"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/masking_and_padding.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/masking_and_padding.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/keras/masking_and_padding.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Setup
End of explanation
raw_inputs = [
[711, 632, 71],
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
# By default, this will pad using 0s; it is configurable via the
# "value" parameter.
# Note that you could "pre" padding (at the beginning) or
# "post" padding (at the end).
# We recommend using "post" padding when working with RNN layers
# (in order to be able to use the
# CuDNN implementation of the layers).
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences(
raw_inputs, padding="post"
)
print(padded_inputs)
Explanation: Introduction
Masking is a way to tell sequence-processing layers that certain timesteps in an input are missing, and thus should be skipped when processing the data.
Padding is a special form of masking where the masked steps are at the start or the end of a sequence. Padding comes from the need to encode sequence data into contiguous batches: to make all sequences in a batch fit a given standard length, it is necessary to pad or truncate some sequences.
Let's take a close look.
Padding sequence data
When processing sequence data, it is very common for individual samples to have different lengths. Consider the following example (text tokenized as words):
[
["Hello", "world", "!"],
["How", "are", "you", "doing", "today"],
["The", "weather", "will", "be", "nice", "tomorrow"],
]
After vocabulary lookup, the data might be vectorized as integers as follows:
[
[71, 1331, 4231]
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
The data is a nested list where individual samples have lengths 3, 5, and 6, respectively. Since the input data for a deep learning model must be a single tensor (of shape e.g. (batch_size, 6, vocab_size) in this case), samples that are shorter than the longest item need to be padded with some placeholder value (alternatively, one might also truncate long samples before padding short samples).
Keras provides a utility function to truncate and pad Python lists to a common length: tf.keras.preprocessing.sequence.pad_sequences.
End of explanation
embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
masked_output = embedding(padded_inputs)
print(masked_output._keras_mask)
masking_layer = layers.Masking()
# Simulate the embedding lookup by expanding the 2D input to 3D,
# with embedding dimension of 10.
unmasked_embedding = tf.cast(
tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32
)
masked_embedding = masking_layer(unmasked_embedding)
print(masked_embedding._keras_mask)
Explanation: Masking
Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking.
There are three ways to introduce input masks in Keras models:
Add a keras.layers.Masking layer.
Configure a keras.layers.Embedding layer with mask_zero=True.
Pass a mask argument manually when calling layers that support this argument (e.g. RNN layers).
Mask-generating layers: Embedding and Masking
Under the hood, these layers will create a mask tensor (a 2D tensor with shape (batch, sequence_length)) and attach it to the tensor output returned by the Masking or Embedding layer.
End of explanation
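For mask_zero=True the mask is simply "True wherever the token id is non-zero". That boolean computation, as a plain-Python sketch (no TensorFlow required):

```python
def compute_mask(padded):
    # One boolean per timestep: False marks a padded (zero) position.
    return [[token != 0 for token in row] for row in padded]

print(compute_mask([[711, 632, 71, 0, 0, 0]]))
# [[True, True, True, False, False, False]]
```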
model = keras.Sequential(
[layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32),]
)
Explanation: As you can see from the printed result, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing.
Mask propagation in the Functional API and Sequential API
When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using it (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it.
For instance, in the following Sequential model, the LSTM layer will automatically receive a mask, which means it will ignore padded values:
End of explanation
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
outputs = layers.LSTM(32)(x)
model = keras.Model(inputs, outputs)
Explanation: This is also the case for the following Functional API model:
End of explanation
class MyLayer(layers.Layer):
def __init__(self, **kwargs):
super(MyLayer, self).__init__(**kwargs)
self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
self.lstm = layers.LSTM(32)
def call(self, inputs):
x = self.embedding(inputs)
# Note that you could also prepare a `mask` tensor manually.
# It only needs to be a boolean tensor
# with the right shape, i.e. (batch_size, timesteps).
mask = self.embedding.compute_mask(inputs)
output = self.lstm(x, mask=mask) # The layer will ignore the masked values
return output
layer = MyLayer()
x = np.random.random((32, 10)) * 100
x = x.astype("int32")
layer(x)
Explanation: Passing mask tensors directly to layers
Layers that can handle masks (such as the LSTM layer) have a mask argument in their __call__ method.
Meanwhile, layers that produce a mask (e.g. Embedding) expose a compute_mask(input, previous_mask) method which you can call.
Thus, you can pass the output of the compute_mask() method of a mask-producing layer to the __call__ method of a mask-consuming layer, like this:
End of explanation
class TemporalSplit(keras.layers.Layer):
"""Split the input tensor into 2 tensors along the time dimension."""
def call(self, inputs):
# Expect the input to be 3D and mask to be 2D, split the input tensor into 2
# subtensors along the time axis (axis 1).
return tf.split(inputs, 2, axis=1)
def compute_mask(self, inputs, mask=None):
# Also split the mask into 2 if it is present.
if mask is None:
return None
return tf.split(mask, 2, axis=1)
first_half, second_half = TemporalSplit()(masked_embedding)
print(first_half._keras_mask)
print(second_half._keras_mask)
Explanation: Supporting masking in your custom layers
Sometimes, you may need to write layers that generate a mask (like Embedding), or layers that need to modify the current mask.
For instance, any layer that produces a tensor with a different time dimension than its input, such as a Concatenate layer that concatenates on the time dimension, will need to modify the current mask so that downstream layers will be able to properly take masked timesteps into account.
To do this, your layer should implement the layer.compute_mask() method, which produces a new mask given the input and the current mask.
Here is an example of a TemporalSplit layer that needs to modify the current mask.
End of explanation
class CustomEmbedding(keras.layers.Layer):
def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs):
super(CustomEmbedding, self).__init__(**kwargs)
self.input_dim = input_dim
self.output_dim = output_dim
self.mask_zero = mask_zero
def build(self, input_shape):
self.embeddings = self.add_weight(
shape=(self.input_dim, self.output_dim),
initializer="random_normal",
dtype="float32",
)
def call(self, inputs):
return tf.nn.embedding_lookup(self.embeddings, inputs)
def compute_mask(self, inputs, mask=None):
if not self.mask_zero:
return None
return tf.not_equal(inputs, 0)
layer = CustomEmbedding(10, 32, mask_zero=True)
x = np.random.random((3, 10)) * 9
x = x.astype("int32")
y = layer(x)
mask = layer.compute_mask(x)
print(mask)
Explanation: Here is another example of a CustomEmbedding layer that is capable of generating a mask from input values:
End of explanation
class MyActivation(keras.layers.Layer):
def __init__(self, **kwargs):
super(MyActivation, self).__init__(**kwargs)
# Signal that the layer is safe for mask propagation
self.supports_masking = True
def call(self, inputs):
return tf.nn.relu(inputs)
Explanation: Opting in to mask propagation on compatible layers
Most layers don't modify the time dimension, so they don't need to modify the current mask. However, they may still want to be able to propagate the current mask, unchanged, to the next layer. This is an opt-in behavior. By default, a custom layer will destroy the current mask (since the framework has no way to tell whether propagating the mask is safe to do).
If you have a custom layer that does not modify the time dimension, and if you want it to be able to propagate the current input mask, you should set self.supports_masking = True in the layer constructor. In this case, the default behavior of compute_mask() is to just pass the current mask through.
Here's an example of a layer that is whitelisted for mask propagation:
End of explanation
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
x = MyActivation()(x) # Will pass the mask along
print("Mask found:", x._keras_mask)
outputs = layers.LSTM(32)(x) # Will receive the mask
model = keras.Model(inputs, outputs)
Explanation: You can now use this custom layer in between a mask-generating layer (like Embedding) and a mask-consuming layer (like LSTM), and it will pass the mask along so that it reaches the mask-consuming layer.
End of explanation
class TemporalSoftmax(keras.layers.Layer):
def call(self, inputs, mask=None):
broadcast_float_mask = tf.expand_dims(tf.cast(mask, "float32"), -1)
inputs_exp = tf.exp(inputs) * broadcast_float_mask
inputs_sum = tf.reduce_sum(inputs_exp * broadcast_float_mask, axis=1, keepdims=True)
return inputs_exp / inputs_sum
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=10, output_dim=32, mask_zero=True)(inputs)
x = layers.Dense(1)(x)
outputs = TemporalSoftmax()(x)
model = keras.Model(inputs, outputs)
y = model(np.random.randint(0, 10, size=(32, 100)), np.random.random((32, 100, 1)))
Explanation: Writing layers that need mask information
Some layers are mask consumers: they accept a mask argument in call and use it to determine whether to skip certain time steps.
To write such a layer, you can simply add a mask=None argument in your call signature. The mask associated with the inputs will be passed to your layer whenever it is available.
Here's a simple example below: a layer that computes a softmax over the time dimension (axis 1) of an input sequence, while discarding masked timesteps.
End of explanation |
8,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy operations
Vectorized operations
NumPy supports vectorized operations, which simplify code and speed up computation. A vectorized operation means writing code that resembles linear-algebra vector or matrix operations instead of using loops.
For example, suppose we have to perform the following operation.
$$
x = \begin{bmatrix}1 \\ 2 \\ 3 \\ \vdots \\ 100 \end{bmatrix}, \;\;\;\;
y = \begin{bmatrix}101 \\ 102 \\ 103 \\ \vdots \\ 200 \end{bmatrix},
$$
$$z = x + y = \begin{bmatrix}1+101 \\ 2+102 \\ 3+103 \\ \vdots \\ 100+200 \end{bmatrix}= \begin{bmatrix}102 \\ 104 \\ 106 \\ \vdots \\ 300 \end{bmatrix}
$$
Without NumPy's vectorized operations, this computation would have to be coded with a loop, as follows.
Step1: However, since NumPy supports vectorized operations, the computation is done with a single addition, as below. The code is exactly the same as the linear-algebra vector-notation expression shown above.
Step2: You can also see that the vectorized operation is much faster.
Element-wise operations
NumPy's vectorized operations are element-wise: they operate on elements in the same positions. If you regard a NumPy ndarray as a linear-algebra vector or matrix, addition and subtraction match the NumPy operations.
Multiplying a scalar by a vector likewise matches the linear-algebra expression in NumPy code.
Step3: NumPy multiplication differs from the definition of the matrix product, i.e., the inner (dot) product. For that case you must use the separate dot command or method.
Step4: Comparison operators are likewise element-wise. They therefore differ from linear-algebra comparisons, where every element of the vector or matrix must be equal.
Step5: If you want to compare entire arrays, use the array_equal command.
Step6: The mathematical functions provided by NumPy, such as the exponential and logarithm functions, support element-wise vectorized operations.
Step7: If you do not use the functions provided by NumPy, vectorized operations are not possible.
Step8: Broadcasting
In linear algebra, matrix addition or subtraction requires the two matrices to have the same size. NumPy, however, also supports arithmetic between two ndarray arrays of different sizes. This feature is called broadcasting: the smaller array is automatically repeated and stretched to match the larger one.
For example, consider adding a vector and a scalar as below. In linear algebra this operation is not possible.
$$
x = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}, \;\;\;\;
x + 1 = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + 1 = ?
$$
NumPy, however, uses broadcasting to expand the scalar to the same size as the vector and then performs the addition.
$$
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \overset{\text{numpy}}{+} 1 =
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + \begin{bmatrix}1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} =
\begin{bmatrix}1 \\ 2 \\ 3 \\ 4 \\ 5 \end{bmatrix}
$$
Step9: Broadcasting also applies in higher dimensions. See the figure below.
<img src="https://datascienceschool.net/upfiles/dbd3775c3b914d4e8c6bbbb342246b6a.png" style="width: 60%; margin: 0 auto 0 auto;">
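The stretching described above can be sketched directly in terms of array shapes (the shapes here are arbitrary illustrations):

```python
import numpy as np

x = np.arange(5)
print(x + 1)  # [1 2 3 4 5]: the scalar is stretched to shape (5,)

a = np.arange(3).reshape(3, 1)  # shape (3, 1)
b = np.arange(4)                # shape (4,)
print((a + b).shape)            # (3, 4): both operands are stretched
```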
Step10: Dimension-reduction operations
If you treat the elements of one row of an ndarray as a single data set and take their mean, you get one number per row. For example, taking the row-mean of a 10x5 two-dimensional array yields a one-dimensional vector of 10 numbers. Such operations are called dimension-reduction operations.
ndarray supports the following dimension-reduction commands or methods.
Max/min: min, max, argmin, argmax
Statistics: sum, mean, median, std, var
Boolean: all, any
Step11: When the operand has two or more dimensions, use the axis argument to indicate along which dimension the computation is performed. axis=0 operates over columns, axis=1 over rows, and so on. The default is 0.
<img src="https://datascienceschool.net/upfiles/edfaf93a7f124f359343d1dcfe7f29fc.png" style="margin: 0 auto 0 auto;">
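For example, argmax honors the same axis convention (a small made-up array):

```python
import numpy as np

x = np.array([[1, 5, 3],
              [4, 2, 6]])
print(x.argmax(axis=0))  # [1 0 1]: row index of the maximum in each column
print(x.argmax(axis=1))  # [1 2]: column index of the maximum in each row
```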
Step12: Sorting
You can use the sort command or method to sort the elements of an array by size and produce a new array. For arrays with two or more dimensions, the axis argument likewise determines the direction.
Step13: The sort method is an in-place method that modifies the object's own data, so it must be used with care.
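One easy-to-miss consequence of this, sketched with a throwaway array: the in-place sort method returns None, unlike np.sort, which returns a sorted copy.

```python
import numpy as np

a = np.array([3, 1, 2])
print(np.sort(a))  # [1 2 3]: sorted copy, a is unchanged
print(a)           # [3 1 2]
print(a.sort())    # None: sorts a itself and returns nothing
print(a)           # [1 2 3]
```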
Step14: If you only want the ordering rather than sorting the data itself, use the argsort command.
import numpy as np
x = np.arange(1, 101)
x
y = np.arange(101, 201)
y
%%time
z = np.zeros_like(x)
for i, (xi, yi) in enumerate(zip(x, y)):
z[i] = xi + yi
z
z
Explanation: NumPy operations
Vectorized operations
NumPy supports vectorized operations, which simplify code and speed up computation. A vectorized operation means writing code that resembles linear-algebra vector or matrix operations instead of using loops.
For example, suppose we have to perform the following operation.
$$
x = \begin{bmatrix}1 \\ 2 \\ 3 \\ \vdots \\ 100 \end{bmatrix}, \;\;\;\;
y = \begin{bmatrix}101 \\ 102 \\ 103 \\ \vdots \\ 200 \end{bmatrix},
$$
$$z = x + y = \begin{bmatrix}1+101 \\ 2+102 \\ 3+103 \\ \vdots \\ 100+200 \end{bmatrix}= \begin{bmatrix}102 \\ 104 \\ 106 \\ \vdots \\ 300 \end{bmatrix}
$$
Without NumPy's vectorized operations, this computation would have to be coded with a loop, as follows.
End of explanation
%%time
z = x + y
z
Explanation: However, since NumPy supports vectorized operations, the computation is done with a single addition, as below. The code is exactly the same as the linear-algebra vector-notation expression shown above.
End of explanation
x = np.arange(10)
x
a = 100
a * x
Explanation: You can also see that the vectorized operation is much faster.
Element-wise operations
NumPy's vectorized operations are element-wise: they operate on elements in the same positions. If you regard a NumPy ndarray as a linear-algebra vector or matrix, addition and subtraction match the NumPy operations.
Multiplying a scalar by a vector likewise matches the linear-algebra expression in NumPy code.
End of explanation
x = np.arange(10)
y = np.arange(10)
x * y
x
y
np.dot(x, y)
x.dot(y)
Explanation: NumPy multiplication differs from the definition of the matrix product, i.e., the inner (dot) product. For that case you must use the separate dot command or method.
End of explanation
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
a == b
a >= b
Explanation: Comparison operators are likewise element-wise. They therefore differ from linear-algebra comparisons, where every element of the vector or matrix must be equal.
End of explanation
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
c = np.array([1, 2, 3, 4])
np.array_equal(a, b)
np.array_equal(a, c)
Explanation: If you want to compare entire arrays, use the array_equal command.
End of explanation
a = np.arange(5)
a
np.exp(a)
10**a
np.log(a)
np.log10(a)
Explanation: The mathematical functions provided by NumPy, such as the exponential and logarithm functions, support element-wise vectorized operations.
End of explanation
import math
a = [1, 2, 3]
math.exp(a)
Explanation: If you do not use the functions provided by NumPy, vectorized operations are not possible.
End of explanation
x = np.arange(5)
y = np.ones_like(x)
x + y
x + 1
Explanation: Broadcasting
In linear algebra, matrix addition or subtraction requires the two matrices to have the same size. NumPy, however, also supports arithmetic between two ndarray arrays of different sizes. This feature is called broadcasting: the smaller array is automatically repeated and stretched to match the larger one.
For example, consider adding a vector and a scalar as below. In linear algebra this operation is not possible.
$$
x = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}, \;\;\;\;
x + 1 = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + 1 = ?
$$
NumPy, however, uses broadcasting to expand the scalar to the same size as the vector and then performs the addition.
$$
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \overset{\text{numpy}}{+} 1 =
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + \begin{bmatrix}1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} =
\begin{bmatrix}1 \\ 2 \\ 3 \\ 4 \\ 5 \end{bmatrix}
$$
End of explanation
np.tile(np.arange(0, 40, 10), (3, 1))
a = np.tile(np.arange(0, 40, 10), (3, 1)).T
a
b = np.array([0, 1, 2])
b
a + b
a = np.arange(0, 40, 10)[:, np.newaxis]
a
a + b
Explanation: Broadcasting also applies in higher dimensions. See the figure below.
<img src="https://datascienceschool.net/upfiles/dbd3775c3b914d4e8c6bbbb342246b6a.png" style="width: 60%; margin: 0 auto 0 auto;">
End of explanation
x = np.array([1, 2, 3, 4])
x
np.sum(x)
x.sum()
x = np.array([1, 3, 2, 4])
x.min(), np.min(x)
x.max()
x.argmin() # index of minimum
x.argmax() # index of maximum
x = np.array([1, 2, 3, 1])
x.mean()
np.median(x)
np.all([True, True, False])
np.any([True, True, False])
a = np.zeros((100, 100), dtype=int)
a
np.any(a == 0)
np.any(a != 0)
np.all(a == 0)
a = np.array([1, 2, 3, 2])
b = np.array([2, 2, 3, 2])
c = np.array([6, 4, 4, 5])
((a <= b) & (b <= c)).all()
Explanation: Dimension-reduction operations
If you treat the elements of one row of an ndarray as a single data set and take their mean, you get one number per row. For example, taking the row-mean of a 10x5 two-dimensional array yields a one-dimensional vector of 10 numbers. Such operations are called dimension-reduction operations.
ndarray supports the following dimension-reduction commands or methods.
Max/min: min, max, argmin, argmax
Statistics: sum, mean, median, std, var
Boolean: all, any
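std and var from this list are not exercised in the cell above; a quick sketch with a tiny array:

```python
import numpy as np

x = np.array([1, 2, 3, 4])
print(x.var())  # 1.25: mean of squared deviations from the mean (2.5)
print(x.std())  # square root of the variance, about 1.118
```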
End of explanation
x = np.array([[1, 1], [2, 2]])
x
x.sum()
x.sum(axis=0) # columns (first dimension)
x.sum(axis=1) # rows (second dimension)
y = np.array([[1, 2, 3], [5, 6, 1]])
np.median(y, axis=-1) # last axis
y
np.median(y, axis=1)
Explanation: When the operand has two or more dimensions, use the axis argument to indicate along which dimension the computation is performed. axis=0 operates over columns, axis=1 over rows, and so on. The default is 0.
<img src="https://datascienceschool.net/upfiles/edfaf93a7f124f359343d1dcfe7f29fc.png" style="margin: 0 auto 0 auto;">
End of explanation
a = np.array([[4, 3, 5], [1, 2, 1]])
a
np.sort(a)
np.sort(a, axis=1)
np.sort(a, axis=0)
Explanation: Sorting
You can use the sort command or method to sort the elements of an array by size and produce a new array. For arrays with two or more dimensions, the axis argument likewise determines the direction.
End of explanation
a
a.sort(axis=1)
a
Explanation: The sort method is an in-place method that modifies the object's own data, so it must be used with care.
End of explanation
a = np.array([4, 3, 1, 2])
j = np.argsort(a)
j
a[j]
Explanation: If you only want the ordering rather than sorting the data itself, use the argsort command.
End of explanation |
8,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Find collocations with typhon
Step1: Collocations between two data arrays
Let's try out the simplest case
Step2: Now, let’s find all measurements of primary that have a maximum distance of 600 kilometers to the measurements of secondary
Step3: The obtained collocations dataset contains variables of 3 groups
Step4: We can also add a temporal filter that filters out all points whose difference in time is bigger than a time interval. We are doing this by using max_interval. Note that our testdata is sampled very sparsely in time.
Step5: As mentioned in :func:collocate, the collocations are returned in compact format, e.g. an efficient way to store the collocated data. When several data points in the secondary group collocate with a single observation of the primary group, it is not obvious how this should be handled. The compact format accounts for this by introducing the Collocations/pairs variable, which contains the respective indices of the collocated datapoints. In practice, the two functions expand and collapse offer two convenient ways to handle this. Applying expand to the collocations will repeat data points for cases where one datapoint matches with several data points of the other dataset.
Step6: Applying collapse to the collocations will calculate some generic statistics (mean, std, count) over the datapoints that match with a single data point of the other dataset.
Step7: Purely temporal collocations are not implemented yet and attempts will raise a NotImplementedError.
Find collocations between two filesets
Normally, one has the data stored in a set of many files. typhon provides an object to handle those filesets (see the typhon doc). It is very simple to find collocations between them.
First, we need to create FileSet objects and let them know where to find their files
Step8: Now, we can search for collocations between a_fileset and b_fileset and store them to ab_collocations.
import cartopy.crs as projections
import numpy as np
import matplotlib.pyplot as plt
from datetime import timedelta
import xarray as xr
from typhon.plots import worldmap
from typhon.collocations import Collocator, expand, collapse
from typhon.files import FileSet, NetCDF4
from typhon.collocations import Collocations
Explanation: Find collocations with typhon
End of explanation
# Create the data
primary = xr.Dataset(
coords={
"lat": (('along_track'), 30.*np.sin(np.linspace(-3.14, 3.14, 24))+20),
"lon": (('along_track'), np.linspace(0, 90, 24)),
"time": (('along_track'), np.arange("2018-01-01", "2018-01-02", dtype="datetime64[h]")),
},
data_vars={
"Temperature": (("along_track"), np.random.normal(290, 5, (24))),
}
)
secondary = xr.Dataset(
coords={
"lat": (('along_track'), 30.*np.sin(np.linspace(-3.14, 3.14, 24)+1.)+20),
"lon": (('along_track'), np.linspace(0, 90, 24)),
"time": (('along_track'), np.arange("2018-01-01", "2018-01-02", dtype="datetime64[h]")),
},
data_vars={
"Temperature": (("along_track"), np.random.normal(290, 5, (24))),
}
)
# Plot the data
fig = plt.figure(figsize=(10, 10))
wmap = worldmap(primary["lat"], primary["lon"], s=24, bg=True)
worldmap(secondary["lat"], secondary["lon"], s=24, ax=wmap.axes,)
Explanation: Collocations between two data arrays
Let's try out the simplest case: You have two xarray datasets with
temporal-spatial data and you want to find collocations between them.
At first, we create two example xarray datasets with faked measurements. Let's
assume, these data arrays represent measurements from two different instruments
(e.g. on satellites). Each measurement has a time attribute indicating when
it was taken and a geo-location (latitude and longitude) indicating where
this happened. Note that the lat and lon variables must share their first
dimension with the time coordinate.
End of explanation
collocator = Collocator(name='primary_secondary_collocator')
collocations = collocator.collocate(
primary=('primary', primary),
secondary=('secondary', secondary),
max_distance=600, # collocation radius in km
)
print(f'Found collocations are {collocations["Collocations/distance"].values} km apart')
collocations
Explanation: Now, let’s find all measurements of primary that have a maximum distance of 600 kilometers to the measurements of secondary:
End of explanation
def collocations_wmap(collocations):
fig = plt.figure(figsize=(10, 10))
# Plot the collocations
wmap = worldmap(
collocations['primary/lat'],
collocations['primary/lon'],
facecolor="r", s=128, marker='x', bg=True
)
worldmap(
collocations['secondary/lat'],
collocations['secondary/lon'],
facecolor="r", s=128, marker='x', bg=True, ax=wmap.axes
)
# Plot all points:
worldmap(primary["lat"], primary["lon"], s=24, ax=wmap.axes,)
worldmap(secondary["lat"], secondary["lon"], s=24, ax=wmap.axes,)
wmap.axes.set(ylim=[-15, 55], xlim=[-10, 100])
collocations_wmap(collocations)
Explanation: The obtained collocations dataset contains variables of 3 groups: primary, secondary and Collocations.
The first two correspond to the variables of the two respective input datasets and contain only the matched
data points. The Collocations group adds some new variables containing information about the collocations, e.g.
the temporal and spatial distances. Additional information can be found in the typhon documentation
Let’s mark the collocations with red crosses on the map:
End of explanation
collocations = collocator.collocate(
primary=('primary', primary),
secondary=('secondary', secondary),
max_distance=300, # collocation radius in km
max_interval=timedelta(hours=1), # temporal collocation interval as timedelta
)
print(
f'Found collocations are {collocations["Collocations/distance"].values} km apart in space '
f'and {collocations["Collocations/interval"].values} hours apart in time.'
)
collocations_wmap(collocations)
Explanation: We can also add a temporal filter that filters out all points whose difference in time is bigger than a time interval. We are doing this by using max_interval. Note that our testdata is sampled very sparsely in time.
End of explanation
expand(collocations)
Explanation: As mentioned in :func:collocate, the collocations are returned in compact format, e.g. an efficient way to store the collocated data. When several data points in the secondary group collocate with a single observation of the primary group, it is not obvious how this should be handled. The compact format accounts for this by introducing the Collocations/pairs variable, which contains the respective indices of the collocated datapoints. This might not be the most practical solution.
In practice, the two functions expand and collapse offer two convenient ways to handle this.
Applying expand to the collocations will repeat data points for cases where one datapoint matches with several data points of the other dataset.
End of explanation
collapse(collocations)
Explanation: Applying collapse to the collocations will calculate some generic statistics (mean, std, count) over the datapoints that match with a single data point of the other dataset.
End of explanation
fh = NetCDF4()
fh.write(secondary, 'testdata/secondary/2018/01/01/000000-235959.nc')
# Create the filesets objects and point them to the input files
a_fileset = FileSet(
name="primary",
path="testdata/primary/{year}/{month}/{day}/"
"{hour}{minute}{second}-{end_hour}{end_minute}{end_second}.nc",
# handler=handlers.NetCDF4,
)
b_fileset = FileSet(
name="secondary",
path="testdata/secondary/{year}/{month}/{day}/"
"{hour}{minute}{second}-{end_hour}{end_minute}{end_second}.nc",
# handler=handlers.NetCDF4,
)
Explanation: Purely temporal collocations are not implemented yet and attempts will raise a NotImplementedError.
Find collocations between two filesets
Normally, one has the data stored in a set of many files. typhon provides an object to handle those filesets (see the typhon doc). It is very simple to find collocations between them.
First, we need to create FileSet objects and let them know where to find their files:
End of explanation
# Create the output dataset:
ab_collocations = Collocations(
name="ab_collocations",
path="testdata/ab_collocations/{year}/{month}/{day}/"
"{hour}{minute}{second}-{end_hour}{end_minute}{end_second}.nc",
)
ab_collocations.search(
[a_fileset, b_fileset], start="2018", end="2018-01-02",
max_interval=timedelta(hours=1), max_distance=300,
)
fh.read('testdata/primary/2018/01/01/000000-235959.nc')
Explanation: Now, we can search for collocations between a_fileset and b_fileset and store them to ab_collocations.
End of explanation |
8,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Searching for products within the datacube
In order to know what kinds of products are available for analysis, the datacube provides a function that will query the database and return a list of all the available products that are indexed with the database.
Importing the datacube
To start with, we'll import the datacube module, load an instance of the datacube, and name our application list-available-products-example.
Step1: The next step is to ask the datacube to list the available products via the call dc.list_products
Step2: The returned list is printed directly to the screen. The column name contains the product names that are used when querying the datacube using the load function. This will be covered in more detail in the loading_data notebook.
import datacube
dc = datacube.Datacube(app='list-available-products-example')
Explanation: Searching for products within the datacube
In order to know what kinds of products are available for analysis, the datacube provides a function that will query the database and return a list of all the available products that are indexed with the database.
Importing the datacube
To start with, we'll import the datacube module, load an instance of the datacube, and name our application list-available-products-example.
End of explanation
dc.list_products()
Explanation: The next step is to ask the datacube to list the available products via the call dc.list_products
End of explanation
data = dc.load(product='bom_rainfall_grids', x=(149.0, 150.0), y=(-36.0, -37.0),
time=('2000-01-01', '2001-01-01'))
data
Explanation: The returned list is printed directly to the screen. The column name contains the product names that are used when querying the datacube using the load function. This will be covered in more detail in the loading_data notebook.
End of explanation |
8,461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
Step4: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step5: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
Step6: Write out the graph for TensorBoard
Step7: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Step8: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the virst split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
cell = tf.contrib.rnn.MultiRNNCell([
tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(lstm_size), output_keep_prob=keep_prob) for _ in range(num_layers)
])
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN putputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
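As a toy illustration of that sliding window (made-up shapes, not the actual training arrays):

```python
import numpy as np

arr = np.arange(20).reshape(2, 10)  # batch_size=2, 10 total steps per row
num_steps = 5
for b in range(arr.shape[1] // num_steps):
    window = arr[:, b * num_steps:(b + 1) * num_steps]
    print(window.shape)  # (2, 5), printed twice: two consecutive windows
```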
End of explanation
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
Explanation: Write out the graph for TensorBoard
End of explanation
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network predicts the next character. We can then use the new one to predict the next one, and keep doing this to generate all-new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation |
8,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sequence Modeling with EDeN
The case for real valued vector labels
Aim
Step1: Artificial data generation
Step2: Discriminative model on categorical labels
Step3: Note
Step4: Model Auto Optimization | Python Code:
#code for making artificial dataset
import random
def swap_two_characters(seq):
'''define a function that swaps two characters at random positions in a string '''
line = list(seq)
id_i = random.randint(0,len(line)-1)
id_j = random.randint(0,len(line)-1)
line[id_i], line[id_j] = line[id_j], line[id_i]
return ''.join(line)
def swap_characters(seed, n):
seq=seed
for i in range(n):
seq = swap_two_characters(seq)
return seq
def make_seed(start=0, end=26):
seq = ''.join([str(unichr(97+i)) for i in range(start,end)])
return swap_characters(seq, end-start)
def make_dataset(n_sequences=None, seed=None, n_swaps=None):
seqs = []
seqs.append( seed )
for i in range(n_sequences):
seq = swap_characters( seed, n_swaps )
seqs.append( seq )
return seqs
def random_capitalize(seqs, p=0.5):
new_seqs=[]
for seq in seqs:
new_seq = [c.upper() if random.random() < p else c for c in seq ]
new_seqs.append(''.join(new_seq))
return new_seqs
def make_artificial_dataset(sequence_length=None, n_sequences=None, n_swaps=None):
seed = make_seed(start=0, end=sequence_length)
print 'Seed: ',seed
seqs = make_dataset(n_sequences=n_sequences, seed=seed, n_swaps=n_swaps)
train_seqs_orig=seqs[:len(seqs)/2]
test_seqs_orig=seqs[len(seqs)/2:]
seqs = random_capitalize(seqs, p=0.5)
print 'Sample with random capitalization:',seqs[:7]
train_seqs=seqs[:len(seqs)/2]
test_seqs=seqs[len(seqs)/2:]
return train_seqs_orig, test_seqs_orig, train_seqs, test_seqs
#code to estimate predictive performance on categorical labeled sequences
def discriminative_estimate(train_pos_seqs, train_neg_seqs, test_pos_seqs, test_neg_seqs):
from eden.graph import Vectorizer
vectorizer = Vectorizer(complexity=complexity)
from eden.converter.graph.sequence import sequence_to_eden
iterable_pos = sequence_to_eden(train_pos_seqs)
iterable_neg = sequence_to_eden(train_neg_seqs)
from eden.util import fit, estimate
estimator = fit(iterable_pos,iterable_neg, vectorizer, n_iter_search=n_iter_search)
from eden.converter.graph.sequence import sequence_to_eden
iterable_pos = sequence_to_eden(test_pos_seqs)
iterable_neg = sequence_to_eden(test_neg_seqs)
estimate(iterable_pos, iterable_neg, estimator, vectorizer)
#code to create real vector labels
def make_encoding(encoding_vector_dimension=3, sequence_length=None, noise_size=0.01):
#vector encoding for chars
default_encoding = [0]*encoding_vector_dimension
start=0
end=sequence_length
#take a list of all chars up to 'length'
char_list = [str(unichr(97+i)) for i in range(start,end)]
encodings={}
import numpy as np
codes = np.random.rand(len(char_list),encoding_vector_dimension)
for i, code in enumerate(codes):
c = str(unichr(97+i))
cc = c.upper()
encoding = list(code)
encodings[c] = encoding
#add noise for the encoding of capitalized chars
noise = np.random.rand(encoding_vector_dimension)*noise_size
encodings[cc] = list(code + noise)
return encodings, default_encoding
def make_encodings(n_encodings=3, encoding_vector_dimension=3, sequence_length=None, noise_size=0.01):
encodings=[]
for i in range(1,n_encodings+1):
encoding, default_encoding = make_encoding(encoding_vector_dimension, sequence_length, noise_size=noise_size)
encodings.append(encoding)
return encodings, default_encoding
Explanation: Sequence Modeling with EDeN
The case for real valued vector labels
Aim: Suppose you are given two sets of sequences. Each sequence is composed of characters in a finite alphabet. However there are similarity relationships between the characters. We want to build a predictive model that can discriminate between the two sets.
Artificial Dataset
Let's build an artificial case. We construct two classes in the following way: for each class we start from a specific but random seed sequence, and the full set is then generated by permuting the positions of k pairs of characters chosen at random in the seed sequence.
To simulate the relationship between characters we do as follows: we select some characters at random and capitalize them. For the machine, a capitalized character is completely different from its lowercase counterpart, but the correspondence is easy for humans to see.
Assume the similarity between chars is given as a symmetric matrix. We can then perform a low-dimensionality embedding of the similarity matrix (e.g. MDS in $\mathbb{R}^4$) and obtain some vector representation for each char such that their Euclidean distance is proportional to their dissimilarity. Let's assume we are already given the vector representation. In our case we just take some random vectors, as they will be roughly equally distant from each other. In order to simulate that the capitalized version of a char should be similar to its lowercase counterpart, we just add a small amount of noise to the vector representation of one of the two.
Auxiliary Code
End of explanation
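The MDS embedding mentioned above is not actually implemented in this notebook (random codes are used instead), but the idea can be sketched on a small hypothetical dissimilarity matrix; `D`, its values, and the 2-D target dimension are all made up for illustration:

```python
import numpy as np
from sklearn.manifold import MDS

# hypothetical symmetric dissimilarity matrix for 3 characters:
# the first two characters are similar, the third is far from both
D = np.array([[0.0, 0.2, 0.9],
              [0.2, 0.0, 0.8],
              [0.9, 0.8, 0.0]])

# embed so that Euclidean distances approximate the dissimilarities
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(D)  # one 2-D vector per character
```

Random codes play the same role in the notebook, which works because random vectors are roughly equidistant from each other.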
from eden.util import configure_logging
import logging
configure_logging(logging.getLogger(),verbosity=2)
#problem parameters
random.seed(1)
sequence_length = 8 #sequences length
n_sequences = 50 #num sequences in positive and negative set
n_swaps = 2 #num pairs of chars that are swapped at random
n_iter_search = 30 #num parameter configurations that are evaluated in hyperparameter optimization
complexity = 2 #feature complexity for the vectorizer
n_encodings = 5 #num vector encoding schemes for chars
encoding_vector_dimension = 9 #vector dimension for char encoding
noise_size = 0.05 #amount of random noise
print 'Positive examples:'
train_pos_seqs_orig, test_pos_seqs_orig, train_pos_seqs, test_pos_seqs = make_artificial_dataset(sequence_length,n_sequences,n_swaps)
print 'Negative examples:'
train_neg_seqs_orig, test_neg_seqs_orig, train_neg_seqs, test_neg_seqs = make_artificial_dataset(sequence_length,n_sequences,n_swaps)
Explanation: Artificial data generation
End of explanation
%%time
#lets estimate the predictive performance of a classifier over the original sequences
print 'Predictive performance on original sequences'
discriminative_estimate(train_pos_seqs_orig, train_neg_seqs_orig, test_pos_seqs_orig, test_neg_seqs_orig)
print '\n\n'
#lets estimate the predictive performance of a classifier over the capitalized sequences
print 'Predictive performance on sequences with random capitalization'
discriminative_estimate(train_pos_seqs, train_neg_seqs, test_pos_seqs, test_neg_seqs)
Explanation: Discriminative model on categorical labels
End of explanation
#lets make a vector encoding for the chars simply using a random encoding
#and a small amount of noise for the capitalized versions
#we can generate a few encodings and let the algorithm choose the best one.
encodings, default_encoding = make_encodings(n_encodings, encoding_vector_dimension, sequence_length, noise_size)
#lets define the 3 main machines: 1) pre_processor, 2) vectorizer, 3) estimator
#the pre_processor takes the raw format and makes graphs
def pre_processor( seqs, encoding=None, default_encoding=None, **args ):
#convert sequences to path graphs
from eden.converter.graph.sequence import sequence_to_eden
graphs = sequence_to_eden(seqs)
#relabel nodes with corresponding vector encoding
from eden.modifier.graph.vertex_attributes import translate
graphs = translate(graphs, label_map = encoding, default = default_encoding)
return graphs
#the vectorizer takes graphs and makes sparse vectors
from eden.graph import Vectorizer
vectorizer = Vectorizer()
#the estimator takes a sparse data matrix and a target column vector and makes a predictive model
from sklearn.linear_model import SGDClassifier
estimator = SGDClassifier(class_weight='auto', shuffle=True)
#the model takes a pre_processor, a vectorizer, an estimator and returns the predictive model
from eden.model import ActiveLearningBinaryClassificationModel
model = ActiveLearningBinaryClassificationModel(pre_processor=pre_processor,
estimator=estimator,
vectorizer=vectorizer,
fit_vectorizer=True )
#lets define hyper-parameter value ranges
from numpy.random import randint
from numpy.random import uniform
pre_processor_parameters={'encoding':encodings, 'default_encoding':[default_encoding]}
vectorizer_parameters={'complexity':[complexity],
'n':randint(3, 20, size=n_iter_search)}
estimator_parameters={'n_iter':randint(5, 100, size=n_iter_search),
'penalty':['l1','l2','elasticnet'],
'l1_ratio':uniform(0.1,0.9, size=n_iter_search),
'loss':['hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'],
'power_t':uniform(0.1, size=n_iter_search),
'alpha': [10**x for x in range(-8,0)],
'eta0': [10**x for x in range(-4,-1)],
'learning_rate': ["invscaling", "constant", "optimal"]}
Explanation: Note: as expected, the capitalization makes the predictive task harder since it expands the vocabulary size and adds variations that look random
Discriminative model on real valued vector labels
End of explanation
%%time
#optimize hyperparameters and fit a predictive model
#determine optimal parameter configuration
model.optimize(train_pos_seqs, train_neg_seqs,
model_name='my_seq.model',
n_active_learning_iterations=0,
n_iter=n_iter_search, cv=3,
pre_processor_parameters=pre_processor_parameters,
vectorizer_parameters=vectorizer_parameters,
estimator_parameters=estimator_parameters)
#print optimal parameter configuration
print model.get_parameters()
#evaluate predictive performance
apr, roc = model.estimate(test_pos_seqs, test_neg_seqs)
Explanation: Model Auto Optimization
End of explanation |
8,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
10 - Ensemble Methods - Continuation
by Alejandro Correa Bahnsen
version 0.2, May 2016
Part of the class Machine Learning for Security Informatics
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham
Why are we learning about ensembling?
Very popular method for improving the predictive performance of machine learning models
Provides a foundation for understanding more sophisticated models
Part 1
Step1: Create 100 decision trees
Step2: Predict using majority voting
Step3: Using majority voting with sklearn
Step4: Part 2
Step5: Estimate the oob error of each classifier
Step6: Estimate $\alpha$
Step7: Using Weighted voting with sklearn
Step8: Part 3
Step9: Using sklearn
Step10: vs using only one dt
Step11: Part 4
Step12: Adaboost
AdaBoost (adaptive boosting) is an ensemble learning algorithm that can be used for classification or regression. Although AdaBoost is more resistant to overfitting than many machine learning algorithms, it is often sensitive to noisy data and outliers.
AdaBoost is called adaptive because it uses multiple iterations to generate a single composite strong learner. AdaBoost creates the strong learner (a classifier that is well-correlated to the true classifier) by iteratively adding weak learners (a classifier that is only slightly correlated to the true classifier). During each round of training, a new weak learner is added to the ensemble and a weighting vector is adjusted to focus on examples that were misclassified in previous rounds. The result is a classifier that has higher accuracy than the weak learners’ classifiers.
Algorithm
Step13: Train the classifier
Step14: Estimate error
Step15: Update weights
Step16: Normalize weights
Step17: Iteration 2 - n_estimators
Step18: Create classification
Only classifiers when error < 0.5
Step19: Using sklearn
Step20: Gradient Boosting | Python Code:
# read in and prepare the chrun data
# Download the dataset
import pandas as pd
import numpy as np
data = pd.read_csv('../datasets/churn.csv')
# Create X and y
# Select only the numeric features
X = data.iloc[:, [1,2,6,7,8,9,10]].astype(np.float)
# Convert bools to floats
X = X.join((data.iloc[:, [4,5]] == 'no').astype(np.float))
y = (data.iloc[:, -1] == 'True.').astype(np.int)
X.head()
y.value_counts().to_frame('count').assign(percentage = lambda x: x/x.sum())
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
Explanation: 10 - Ensemble Methods - Continuation
by Alejandro Correa Bahnsen
version 0.2, May 2016
Part of the class Machine Learning for Security Informatics
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham
Why are we learning about ensembling?
Very popular method for improving the predictive performance of machine learning models
Provides a foundation for understanding more sophisticated models
Part 1: Combination of classifiers - Majority Voting
The most typical form of an ensemble is made by combining $T$ different base classifiers.
Each base classifier $M(\mathcal{S}_j)$ is trained by applying algorithm $M$ to a random subset
$\mathcal{S}_j$ of the training set $\mathcal{S}$.
For simplicity we define $M_j \equiv M(\mathcal{S}_j)$ for $j=1,\dots,T$, and
$\mathcal{M}=\{M_j\}_{j=1}^{T}$ a set of base classifiers.
Then, these models are combined using majority voting to create the ensemble $H$ as follows
$$
f_{mv}(\mathcal{S},\mathcal{M}) = \max_{c \in \{0,1\}} \sum_{j=1}^T
\mathbf{1}_c(M_j(\mathcal{S})).
$$
End of explanation
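Before running the full experiment, the voting rule itself can be illustrated on a tiny hand-made vote matrix (toy values, independent of the trees trained in this notebook):

```python
import numpy as np

# rows = examples, columns = the 0/1 votes of T = 3 base classifiers
votes = np.array([[1, 0, 1],
                  [0, 0, 1],
                  [1, 1, 1]])
T = votes.shape[1]

# majority vote: predict 1 wherever at least half of the T classifiers vote 1
majority = (votes.sum(axis=1) >= T / 2).astype(int)
```

The same thresholding is applied below to the predictions of the 100 bootstrapped trees.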
n_estimators = 100
# set a seed for reproducibility
np.random.seed(123)
n_samples = X_train.shape[0]
# create bootstrap samples (will be used to select rows from the DataFrame)
samples = [np.random.choice(a=n_samples, size=n_samples, replace=True) for _ in range(n_estimators)]
from sklearn.tree import DecisionTreeClassifier
np.random.seed(123)
seeds = np.random.randint(1, 10000, size=n_estimators)
trees = {}
for i in range(n_estimators):
trees[i] = DecisionTreeClassifier(max_features="sqrt", max_depth=None, random_state=seeds[i])
trees[i].fit(X_train.iloc[samples[i]], y_train.iloc[samples[i]])
# Predict
y_pred_df = pd.DataFrame(index=X_test.index, columns=list(range(n_estimators)))
for i in range(n_estimators):
y_pred_df.ix[:, i] = trees[i].predict(X_test)
y_pred_df.head()
Explanation: Create 100 decision trees
End of explanation
y_pred_df.sum(axis=1)[:10]
y_pred = (y_pred_df.sum(axis=1) >= (n_estimators / 2)).astype(np.int)
from sklearn import metrics
metrics.f1_score(y_pred, y_test)
metrics.accuracy_score(y_pred, y_test)
Explanation: Predict using majority voting
End of explanation
from sklearn.ensemble import BaggingClassifier
clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=100, bootstrap=True,
random_state=42, n_jobs=-1, oob_score=True)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Explanation: Using majority voting with sklearn
End of explanation
samples_oob = []
# show the "out-of-bag" observations for each sample
for sample in samples:
samples_oob.append(sorted(set(range(n_samples)) - set(sample)))
Explanation: Part 2: Combination of classifiers - Weighted Voting
The majority voting approach gives the same weight to each classfier regardless of the performance of each one. Why not take into account the oob performance of each classifier
First, in the traditional approach, a
similar comparison of the votes of the base classifiers is made, but giving a weight $\alpha_j$
to each classifier $M_j$ during the voting phase
$$
f_{wv}(\mathcal{S},\mathcal{M}, \alpha)
=\max_{c \in \{0,1\}} \sum_{j=1}^T \alpha_j \mathbf{1}_c(M_j(\mathcal{S})),
$$
where $\alpha=\{\alpha_j\}_{j=1}^T$.
The calculation of $\alpha_j$ is related to the performance of each classifier $M_j$.
It is usually defined from the misclassification error $\epsilon$ of the base
classifier $M_j$ on the out-of-bag set $\mathcal{S}_j^{oob}=\mathcal{S}-\mathcal{S}_j$, normalized over all classifiers:
\begin{equation}
\alpha_j=\frac{1-\epsilon(M_j(\mathcal{S}_j^{oob}))}{\sum_{j_1=1}^T
\left(1-\epsilon(M_{j_1}(\mathcal{S}_{j_1}^{oob}))\right)}.
\end{equation}
Select each oob sample
End of explanation
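The weighting equation above can be sketched on toy numbers before applying it to the real out-of-bag errors (the error values here are hypothetical):

```python
import numpy as np

# hypothetical out-of-bag misclassification errors of T = 3 base classifiers
errors = np.array([0.10, 0.30, 0.20])

# alpha_j: out-of-bag accuracy of each classifier, normalized to sum to 1
alpha = (1 - errors) / (1 - errors).sum()

# weighted vote of the three classifiers for a single example
votes = np.array([1, 0, 1])
weighted_vote = int(np.dot(votes, alpha) >= 0.5)
```

Note that the more accurate classifiers (lower error) receive the larger weights.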
errors = np.zeros(n_estimators)
for i in range(n_estimators):
y_pred_ = trees[i].predict(X_train.iloc[samples_oob[i]])
errors[i] = 1 - metrics.accuracy_score(y_train.iloc[samples_oob[i]], y_pred_)
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
plt.scatter(range(n_estimators), errors)
plt.xlim([0, n_estimators])
plt.title('OOB error of each tree')
Explanation: Estimate the oob error of each classifier
End of explanation
alpha = (1 - errors) / (1 - errors).sum()
weighted_sum_1 = ((y_pred_df) * alpha).sum(axis=1)
weighted_sum_1.head(20)
y_pred = (weighted_sum_1 >= 0.5).astype(np.int)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Explanation: Estimate $\alpha$
End of explanation
clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=100, bootstrap=True,
random_state=42, n_jobs=-1, oob_score=True)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
errors = np.zeros(clf.n_estimators)
y_pred_all_ = np.zeros((X_test.shape[0], clf.n_estimators))
for i in range(clf.n_estimators):
oob_sample = ~clf.estimators_samples_[i]
y_pred_ = clf.estimators_[i].predict(X_train.values[oob_sample])
errors[i] = metrics.accuracy_score(y_pred_, y_train.values[oob_sample])
y_pred_all_[:, i] = clf.estimators_[i].predict(X_test)
alpha = (1 - errors) / (1 - errors).sum()
y_pred = (np.sum(y_pred_all_ * alpha, axis=1) >= 0.5).astype(np.int)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Explanation: Using Weighted voting with sklearn
End of explanation
X_train_2 = pd.DataFrame(index=X_train.index, columns=list(range(n_estimators)))
for i in range(n_estimators):
X_train_2[i] = trees[i].predict(X_train)
X_train_2.head()
from sklearn.linear_model import LogisticRegressionCV
lr = LogisticRegressionCV()
lr.fit(X_train_2, y_train)
lr.coef_
y_pred = lr.predict(y_pred_df)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Explanation: Part 3: Combination of classifiers - Stacking
The staking method consists in combining the different base classifiers by learning a
second level algorithm on top of them. In this framework, once the base
classifiers are constructed using the training set $\mathcal{S}$, a new set is constructed
where the output of the base classifiers are now considered as the features while keeping the
class labels.
Even though there is no restriction on which algorithm can be used as a second level learner,
it is common to use a linear model, such as
$$
f_s(\mathcal{S},\mathcal{M},\beta) =
g \left( \sum_{j=1}^T \beta_j M_j(\mathcal{S}) \right),
$$
where $\beta=\{\beta_j\}_{j=1}^T$, and $g(\cdot)$ is the sign function
$g(z)=sign(z)$ in the case of a linear regression or the sigmoid function, defined
as $g(z)=1/(1+e^{-z})$, in the case of a logistic regression.
Let's first get a new training set consisting of the output of every classifier
End of explanation
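The same idea in miniature (the base-classifier outputs below are hypothetical toy values, not the churn data): the base predictions form the feature matrix of a second-level logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns = 0/1 predictions of T = 3 base classifiers on 4 training examples
base_preds = np.array([[1, 1, 0],
                       [0, 0, 1],
                       [1, 0, 1],
                       [0, 1, 0]])
y = np.array([1, 0, 1, 0])

# second-level learner g(sum_j beta_j M_j(x)) from the equation above
stacker = LogisticRegression().fit(base_preds, y)
stacked_pred = stacker.predict(base_preds)
```

Here the first base classifier is perfectly informative, so the stacker learns to rely on its column.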
y_pred_all_ = np.zeros((X_test.shape[0], clf.n_estimators))
X_train_3 = np.zeros((X_train.shape[0], clf.n_estimators))
for i in range(clf.n_estimators):
X_train_3[:, i] = clf.estimators_[i].predict(X_train)
y_pred_all_[:, i] = clf.estimators_[i].predict(X_test)
lr = LogisticRegressionCV()
lr.fit(X_train_3, y_train)
y_pred = lr.predict(y_pred_all_)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Explanation: Using sklearn
End of explanation
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
y_pred = dt.predict(X_test)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
Explanation: vs using only one dt
End of explanation
from IPython.display import Image
Image(url= "http://vision.cs.chubu.ac.jp/wp/wp-content/uploads/2013/07/OurMethodv81.png", width=900)
Explanation: Part 4: Boosting
While boosting is not algorithmically constrained, most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are typically weighted in some way that is usually related to the weak learners' accuracy. After a weak learner is added, the data is reweighted: examples that are misclassified gain weight and examples that are classified correctly lose weight (some boosting algorithms actually decrease the weight of repeatedly misclassified examples, e.g., boost by majority and BrownBoost). Thus, future weak learners focus more on the examples that previous weak learners misclassified. (Wikipedia)
End of explanation
# read in and prepare the churn data
# Download the dataset
import pandas as pd
import numpy as np
data = pd.read_csv('../datasets/churn.csv')
# Create X and y
# Select only the numeric features
X = data.iloc[:, [1,2,6,7,8,9,10]].astype(np.float)
# Convert bools to floats
X = X.join((data.iloc[:, [4,5]] == 'no').astype(np.float))
y = (data.iloc[:, -1] == 'True.').astype(np.int)
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
n_samples = X_train.shape[0]
n_estimators = 10
weights = pd.DataFrame(index=X_train.index, columns=list(range(n_estimators)))
t = 0
weights[t] = 1 / n_samples
Explanation: Adaboost
AdaBoost (adaptive boosting) is an ensemble learning algorithm that can be used for classification or regression. Although AdaBoost is more resistant to overfitting than many machine learning algorithms, it is often sensitive to noisy data and outliers.
AdaBoost is called adaptive because it uses multiple iterations to generate a single composite strong learner. AdaBoost creates the strong learner (a classifier that is well-correlated to the true classifier) by iteratively adding weak learners (a classifier that is only slightly correlated to the true classifier). During each round of training, a new weak learner is added to the ensemble and a weighting vector is adjusted to focus on examples that were misclassified in previous rounds. The result is a classifier that has higher accuracy than the weak learners’ classifiers.
Algorithm:
Initialize all weights ($w_i$) to 1 / n_samples
Train a classifier $h_t$ using weights
Estimate training error $e_t$
set $\alpha_t = \log\left(\frac{1-e_t}{e_t}\right)$
Update weights
$$w_i^{t+1} = w_i^{t}\,e^{\left(\alpha_t \mathbf{I}\left(y_i \ne h_t(x_i)\right)\right)}$$
Repeat while $e_t<0.5$ and $t<T$
End of explanation
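One round of the weight update above can be worked through on toy values (not tied to the churn data):

```python
import numpy as np

w = np.full(4, 0.25)                # step 1: uniform weights over 4 examples
y_true = np.array([1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0])     # the weak learner misses example 2

e = w[y_pred != y_true].sum()       # weighted training error e_t = 0.25
alpha = np.log((1 - e) / e)         # classifier weight alpha_t = log(3)

w = w * np.exp(alpha * (y_pred != y_true))  # up-weight the mistakes by e^alpha
w = w / w.sum()                             # renormalize to a distribution
```

After the update, the misclassified example carries half of the total weight, so the next weak learner focuses on it.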
from sklearn.tree import DecisionTreeClassifier
trees = []
trees.append(DecisionTreeClassifier(max_depth=1))
trees[t].fit(X_train, y_train, sample_weight=weights[t].values)
Explanation: Train the classifier
End of explanation
y_pred_ = trees[t].predict(X_train)
error = []
error.append(1 - metrics.accuracy_score(y_pred_, y_train))
error[t]
alpha = []
alpha.append(np.log((1 - error[t]) / error[t]))
alpha[t]
Explanation: Estimate error
End of explanation
weights[t + 1] = weights[t]
filter_ = y_pred_ != y_train
weights.loc[filter_, t + 1] = weights.loc[filter_, t] * np.exp(alpha[t])
Explanation: Update weights
End of explanation
weights[t + 1] = weights[t + 1] / weights[t + 1].sum()
Explanation: Normalize weights
End of explanation
for t in range(1, n_estimators):
trees.append(DecisionTreeClassifier(max_depth=1))
trees[t].fit(X_train, y_train, sample_weight=weights[t].values)
y_pred_ = trees[t].predict(X_train)
error.append(1 - metrics.accuracy_score(y_pred_, y_train))
alpha.append(np.log((1 - error[t]) / error[t]))
weights[t + 1] = weights[t]
filter_ = y_pred_ != y_train
weights.loc[filter_, t + 1] = weights.loc[filter_, t] * np.exp(alpha[t])
weights[t + 1] = weights[t + 1] / weights[t + 1].sum()
error
Explanation: Iteration 2 - n_estimators
End of explanation
new_n_estimators = np.sum([x<0.5 for x in error])
y_pred_all = np.zeros((X_test.shape[0], new_n_estimators))
for t in range(new_n_estimators):
y_pred_all[:, t] = trees[t].predict(X_test)
y_pred = (np.sum(y_pred_all * alpha[:new_n_estimators], axis=1) >= 1).astype(np.int)
metrics.f1_score(y_pred, y_test.values), metrics.accuracy_score(y_pred, y_test.values)
Explanation: Create classification
Only classifiers when error < 0.5
End of explanation
from sklearn.ensemble import AdaBoostClassifier
clf = AdaBoostClassifier()
clf
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test.values), metrics.accuracy_score(y_pred, y_test.values)
Explanation: Using sklearn
End of explanation
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier()
clf
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test.values), metrics.accuracy_score(y_pred, y_test.values)
Explanation: Gradient Boosting
End of explanation |
8,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Express Deep Learning in Python - Examples
We will run a couple of examples to see how different parameters affect the performance of the classifier.
Step1: Convolutional 1
Step2: Convolutional 2 | Python Code:
import numpy
import keras
import os
from keras import backend as K
from keras import losses, optimizers, regularizers
from keras.datasets import mnist
from keras.layers import Activation, ActivityRegularization, Conv2D, Dense, Dropout, Flatten, MaxPooling2D
from keras.models import Sequential
from keras.utils.np_utils import to_categorical
from keras.callbacks import TensorBoard
batch_size = 128
num_classes = 10
epochs = 10
TRAIN_EXAMPLES = 60000
TEST_EXAMPLES = 10000
# image dimensions
img_rows, img_cols = 28, 28
# load the data (already shuffled and splitted)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# reshape the data to add the "channels" dimension
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
# normalize the input in the range [0, 1]
# to make quick runs, select a smaller set of images.
train_mask = numpy.random.choice(x_train.shape[0], TRAIN_EXAMPLES, replace=False)
x_train = x_train[train_mask, :].astype('float32')
y_train = y_train[train_mask]
test_mask = numpy.random.choice(x_test.shape[0], TEST_EXAMPLES, replace=False)
x_test = x_test[test_mask, :].astype('float32')
y_test = y_test[test_mask]
x_train /= 255
x_test /= 255
print('Train samples: %d' % x_train.shape[0])
print('Test samples: %d' % x_test.shape[0])
# convert class vectors to binary class matrices
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
Explanation: Express Deep Learning in Python - Examples
We will run a couple of examples to see how different parameters affect the performance of the classifier.
End of explanation
EXPERIMENT_COUNTER = 4
def write_summary(filename, model):
with open(filename, 'w') as log_file:
model.summary(print_fn=lambda x: log_file.write(x + '\n'))
def evaluate_model(model, experiment_name=None):
# default arguments are evaluated once at definition time, so read the
# global counter inside the function instead of in the signature
if experiment_name is None:
experiment_name = EXPERIMENT_COUNTER
# train the model
logs_dirname = './logs/experiment-{}'.format(experiment_name)
tensorboard = TensorBoard(log_dir=logs_dirname, histogram_freq=0,
write_graph=False, write_images=False)
epochs = 20
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test),
callbacks=[tensorboard])
# TIP: write the model summary to keep track of your experiments
write_summary(os.path.join(logs_dirname, 'model-summary.txt'), model)
# evaluate the model
return model.evaluate(x_test, y_test, verbose=0)
# define the network architecture
model = Sequential()
model.add(Conv2D(filters=16,
kernel_size=(3, 3),
strides=(1,1),
padding='valid',
activation='relu',
input_shape=input_shape,
activity_regularizer='l2'))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# compile the model
model.compile(loss=losses.categorical_crossentropy,
optimizer=optimizers.RMSprop(),
metrics=['accuracy', 'mae'])
evaluate_model(model)
EXPERIMENT_COUNTER += 1
Explanation: Convolutional 1
End of explanation
# define the network architecture
model = Sequential()
model.add(Conv2D(filters=16,
kernel_size=(3, 3),
strides=(1,1),
padding='valid',
activation='sigmoid',
input_shape=input_shape,
activity_regularizer='l2'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='sigmoid'))
model.add(Dropout(0.25))
model.add(Dense(num_classes, activation='softmax'))
# compile the model
model.compile(loss=losses.categorical_crossentropy,
optimizer=optimizers.RMSprop(),
metrics=['accuracy', 'mae'])
evaluate_model(model)
EXPERIMENT_COUNTER += 1
Explanation: Convolutional 2
End of explanation |
8,465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wheel odometry with an extended Kalman filter (EKF)
A wheel rolling without slipping
Writing down the equations of motion
The notebook (Kalman1.ipnb) and its accompanying files (./img/*, ./dat/a.txt) can be run and modified interactively after uploading them to tmpnb.org. Beware: files there are deleted after 10 minutes of inactivity, so it is advisable to save your changes.
GitHub repo from which the required files can be downloaded (repo)
Kalman1.ipnb
./img/*.png
./dat/a.txt
The motion of a wheel rolling without slipping is composed of straight-line translation and rotation. The instantaneous contact point has zero velocity, as can be seen in the figure
Step2: Az állapot átmenet $A$ mátrixa már linearizálva van
Step3: Kibővített Kálmán-szűrő (Extended Kalman Filter, EKF)
A kibővített szűrő a nemlineáris mérési egyenlet Jacobi mátrixának felírásával valósítható meg. A mérési egyenlet $H$ Jacobi mátrixa
$$ H = \begin{bmatrix}
-\frac{g}{r_w}\cos\left(\frac{p}{r_w}\right)-\frac{\ddot{p}}{r_w}\sin\left(\frac{p}{r_w}\right) & 0 & \cos\left(\frac{p}{r_w}\right)-\frac{r_s}{r_w}\
\frac{g}{r_w}\sin\left(\frac{p}{r_w}\right)-\frac{\ddot{p}}{r_w}\cos\left(\frac{p}{r_w}\right) & -2\frac{r_s}{r_w^2}\dot{p} & -\sin\left(\frac{p}{r_w}\right)
\end{bmatrix} .$$
The state-noise variance assumed for the stochastic model is $q=0.07 m/s^2$; the error variance of the measured accelerations is $r_1=r_2=5 m/s^2$.
The following Python function computes the matrix $H$
Step4: Test data
We test the extended Kalman filter with acceleration data taken from the paper of Gersdorf B. and Frese U. (2013).
Step6: In one step of the extended Kalman filter we compute the prediction of the system state and then update the state estimate with each new measurement $\mathbf{z}_k$.
prediction
Step8: The computation is implemented in the EKFwheel(t,a1,a2,q,r) function. Its parameters are the following
Step9: After this we can run the filter
Step10: Finally, we plot the results obtained with the extended Kalman filter | Python Code:
def h(x,rs,rw):
    ## measurement equation
    ## x  = state vector (p, pdot, pdotdot)
    ## rs = distance of the sensor from the wheel axle
    ## rw = wheel radius
    g = 9.81
    h1 = -g*np.sin(x[0]/rw) + x[2]*np.cos(x[0]/rw) - x[2]*rs/rw
    h2 = -g*np.cos(x[0]/rw) - x[2]*np.sin(x[0]/rw) - (x[1])**2*rs/(rw**2)
    return np.array([h1,h2]).flatten()
Explanation: Wheel odometry with an extended (EKF) Kalman filter
A wheel rolling without slipping
Writing down the equations of motion
The notebook (Kalman1.ipnb) and its accompanying files (./img/*, ./dat/a.txt) can be run and modified interactively after uploading them to tmpnb.org. Beware: files there are deleted after 10 minutes of inactivity, so it is advisable to save your changes.
GitHub repo from which the required files can be downloaded (repo)
Kalman1.ipnb
./img/*.png
./dat/a.txt
The motion of a wheel rolling without slipping consists of straight-line translation and rotation. The instantaneous contact point has zero velocity, as the figure shows:
<img src="./img/gordul1.png" />
The figure shows a wheel of radius $r_w$ with a two-axis accelerometer sensor S fixed at distance $r_s$ from its centre O; axis 1 of the sensor points perpendicular to the radius in the positive direction of rotation (clockwise), and axis 2 points along the radius, toward the centre O.
<img src="./img/gordul2.png" />
The components of the gravitational acceleration vector $\mathbf{g}$ along the sensor axes are:
$$a_1=-g\sin(\theta)$$
$$a_2=-g\cos(\theta)$$.
For a wheel whose centre O moves with linear acceleration $\ddot{p}$, the proof mass of the sensor S experiences an acceleration in the opposite direction, as shown in the figure
<img src="./img/gordul3.png" />
with components:
$$a_1=\ddot{p}\cos(\theta)$$
$$a_2=-\ddot{p}\sin(\theta)$$.
Finally, the accelerating rotation of the wheel produces a centripetal acceleration along $a_1$ and a centrifugal acceleration along $a_2$.
<img src="./img/gordul4.png" />
Because the wheel rolls without slipping, the rotation angle $\theta$, the angular velocity $\dot\theta$, and the angular acceleration $\ddot\theta$ are tied to the distance travelled $p$, the forward speed $\dot{p}$, and the linear acceleration $\ddot{p}$:
$$\theta=\frac{p}{r_w}$$
$$\dot\theta=\frac{\dot{p}}{r_w}$$
$$\ddot\theta=\frac{\ddot{p}}{r_w}$$
The accelerations arising along the sensor axes are therefore
$$a_1=-r_s \ddot\theta = -\frac{r_s}{r_w}\ddot{p}$$
$$a_2=-r_s \dot\theta^2 = -\frac{r_s}{r_w^2}\dot{p}^2$$
The components of the total acceleration acting on the sensor can now be written out:
$$a_1=-g\sin(\frac{p}{r_w})+\ddot{p}\cos(\frac{p}{r_w})-\frac{r_s}{r_w}\ddot{p}$$
$$a_2=-g\cos(\frac{p}{r_w})-\ddot{p}\sin(\frac{p}{r_w})-\frac{r_s}{r_w^2}\dot{p}^2$$
State and measurement equations
To write down the Kalman filter equations, the system state vector $\mathbf{x}$ is made up of the wheel's position, velocity, and acceleration
$$\mathbf{x} = \begin{bmatrix}
p\
\dot p\
\ddot p
\end{bmatrix} .$$
The measurement function $h(\mathbf{x},\mathbf{v})$ is the nonlinear function giving the measured accelerations
$$ h(\mathbf{x},\mathbf{v}) = \begin{bmatrix}
-g\sin\left(\frac{p}{r_w}\right)+\ddot{p}\cos\left(\frac{p}{r_w}\right)-\frac{r_s}{r_w}\ddot{p} + v_1\
-g\cos\left(\frac{p}{r_w}\right)-\ddot{p}\sin\left(\frac{p}{r_w}\right)-\frac{r_s}{r_w^2}\dot{p}^2 + v_2
\end{bmatrix} .$$
The Python code of the measurement function:
End of explanation
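As a quick sanity check (an addition, not part of the original notebook), the measurement function can be evaluated for a wheel at rest: with $p=\dot p=\ddot p=0$ the tangential axis should read zero and the radial axis should read $-g$.

```python
import numpy as np

def h(x, rs, rw):
    # measurement function from above: tangential and radial sensor accelerations
    g = 9.81
    h1 = -g*np.sin(x[0]/rw) + x[2]*np.cos(x[0]/rw) - x[2]*rs/rw
    h2 = -g*np.cos(x[0]/rw) - x[2]*np.sin(x[0]/rw) - x[1]**2*rs/rw**2
    return np.array([h1, h2])

a1, a2 = h(np.zeros(3), rs=0.095, rw=0.35)  # wheel at rest
print(a1, a2)  # 0.0 -9.81
```

The sensor geometry (rs=0.095, rw=0.35) matches the values used later in EKFwheel.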
def f(x,dt):
    """State propagation function.
    x  = state vector (p, pdot, pdotdot)
    dt = time step
    """
    f1 = x[0] + x[1]*dt + 0.5*x[2]*dt**2
    f2 = x[1] + x[2]*dt
    f3 = x[2]
    return np.array([f1,f2,f3]).flatten()
Explanation: The state-transition matrix $A$ is already linearized:
$$ A = \begin{bmatrix}
1 & \Delta t & \frac{1}{2}\Delta t^2 \
0 & 1 & \Delta t\
0 & 0 & 1
\end{bmatrix}$$
The Python function for the state propagation can therefore be written simply:
End of explanation
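A small added check (not from the original notebook): for this constant-acceleration model the propagation function is exactly the linear map $A\mathbf{x}$, so $f$ and the matrix $A$ above must agree.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt, 0.5*dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
x = np.array([2.0, 1.0, 0.5])  # arbitrary state (p, pdot, pdotdot)
f_x = np.array([x[0] + x[1]*dt + 0.5*x[2]*dt**2,   # position
                x[1] + x[2]*dt,                    # velocity
                x[2]])                             # acceleration
print(np.allclose(f_x, A @ x))  # True
```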
def H(x,rs,rw):
    ## Jacobian matrix of the measurement equation
    ## x  = state vector (p, pdot, pdotdot)
    ## rs = distance of the sensor from the wheel axle
    ## rw = wheel radius
    g = 9.81
    H = np.zeros((2,3))
    H[0,0] = -g/rw*np.cos(x[0]/rw) - x[2]/rw*np.sin(x[0]/rw)
    H[0,1] = 0.0
    H[0,2] = np.cos(x[0]/rw) - rs/rw
    H[1,0] = g/rw*np.sin(x[0]/rw) - x[2]/rw*np.cos(x[0]/rw)
    H[1,1] = -2*x[1]*rs/(rw**2)
    H[1,2] = -np.sin(x[0]/rw)
    return H
Explanation: Extended Kalman Filter (EKF)
The extended filter is realised by writing down the Jacobian matrix of the nonlinear measurement equation. The Jacobian $H$ of the measurement equation is
$$ H = \begin{bmatrix}
-\frac{g}{r_w}\cos\left(\frac{p}{r_w}\right)-\frac{\ddot{p}}{r_w}\sin\left(\frac{p}{r_w}\right) & 0 & \cos\left(\frac{p}{r_w}\right)-\frac{r_s}{r_w}\
\frac{g}{r_w}\sin\left(\frac{p}{r_w}\right)-\frac{\ddot{p}}{r_w}\cos\left(\frac{p}{r_w}\right) & -2\frac{r_s}{r_w^2}\dot{p} & -\sin\left(\frac{p}{r_w}\right)
\end{bmatrix} .$$
The state-noise variance assumed for the stochastic model is $q=0.07 m/s^2$; the error variance of the measured accelerations is $r_1=r_2=5 m/s^2$.
The following Python function computes the matrix $H$:
End of explanation
# -*- coding:utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
data = np.loadtxt("./dat/a.txt")
t = data[:,0]
a1= data[:,1]
a2= data[:,2]
Explanation: Test data
We test the extended Kalman filter with acceleration data taken from the paper of Gersdorf B. and Frese U. (2013).
End of explanation
def EKFstep(xk,Pk,A,H,Q,R,zk,dt,rs,rw):
    """One step of the extended Kalman filter."""
    # prediction (time update)
    xkm = f(xk,dt)
    Pkm = np.dot(np.dot(A,Pk),A.T) + Q
    # new measurement, correction (measurement update)
    Kk = np.dot(Pkm,np.dot(H.T,np.linalg.pinv(np.dot(np.dot(H,Pkm),H.T) + R)))
    xk1 = xkm + np.dot(Kk,(zk - h(xkm,rs,rw)))
    Pk1 = Pkm - np.dot(np.dot(Kk,H),Pkm)
    return xk1,Pk1
Explanation: In one step of the extended Kalman filter we compute the prediction of the system state and then update the state estimate with each new measurement $\mathbf{z}_k$.
prediction:
$$ \hat{\mathbf{x}}_k^-{} = f(\mathbf{x}_k)$$
$$ \mathbf{P}_k^-{} = \mathbf{A} \mathbf{P}_k \mathbf{A}^T + \mathbf{Q} $$
update:
$$ \mathbf{K}_k = \mathbf{P}_k^-{} \mathbf{H}_k^T (\mathbf{H} \mathbf{P}_k^-{} \mathbf{H}^T + \mathbf{R})^{-1} $$
$$ \hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^-{} + \mathbf{K}_k (\mathbf{z}_k - h(\hat{\mathbf{x}}_k^-{})) $$
$$ \mathbf{P}_k = \mathbf{P}_k^-{} - \mathbf{K}_k \mathbf{H} \mathbf{P}_k^-{} $$
The Python function implementing this step:
End of explanation
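For intuition about the update step (an added scalar example, not from the original notebook): the gain blends the prior with the measurement, and the posterior variance $\mathbf{P}_k = \mathbf{P}_k^- - \mathbf{K}_k \mathbf{H} \mathbf{P}_k^-$ is never larger than the prior.

```python
Pm, Hm, R = 4.0, 1.0, 1.0          # prior variance, measurement matrix, noise variance
K = Pm * Hm / (Hm * Pm * Hm + R)   # Kalman gain, here 4/5
P = Pm - K * Hm * Pm               # posterior variance, always <= prior
print(K, P <= Pm)
```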
def EKFwheel(t,a1,a2,q=0.07,r=5):
    """Extended Kalman filter for odometry with a two-axis accelerometer sensor.
    t      - vector of measurement epochs
    a1, a2 - measured centripetal and centrifugal sensor accelerations (m/s**2)
    q      - process noise variance (m/s**2)
    r      - measurement noise variance (m/s**2)
    """
    # sensor distance from the axle and wheel radius (Samsung Galaxy S2 mounted on a bicycle)
    rs = 0.095  # metres
    rw = 0.35   # metres
    nt = t.shape[0]
    # state log: t, p, vel, acc, theta
    xe = np.zeros((nt,5))
    xe[:,0] = t
    # initialisation
    Q = q**2*np.eye(3)
    R = r**2*np.eye(2)
    xk = np.zeros(3)  # 1-D state vector, matching f() and H()
    Pk = q**2*np.diag([0, 0, 1])
    for i in range(1,nt):
        dt = t[i]-t[i-1]
        Hk = H(xk,rs,rw)
        A = np.eye(3) + np.array([[0,dt,0.5*dt**2], [0,0,dt], [0,0,0]])
        zk = np.array([a1[i],a2[i]])
        # EKF step
        xk1,Pk1 = EKFstep(xk,Pk,A,Hk,Q,R,zk,dt,rs,rw)
        xe[i,1:4] = xk1
        xk = xk1  # update the state
        Pk = Pk1  # update the state covariance
        om = xk1[0]/rw  # wheel rotation angle
        xe[i,4] = np.arctan2(np.sin(om),np.cos(om))  # angle wrapped to (-pi, pi)
    return xe
Explanation: The computation is implemented in the EKFwheel(t,a1,a2,q,r) function. Its parameters are:
* t: vector of measurement epochs
* a1, a2: accelerometer readings at the epochs given in t
* q: process noise
* r: measurement noise
The function returns the estimated system state in the matrix xe, with the angle values wrapped to $-\pi \le \theta \le \pi$.
The Python code of the function:
End of explanation
xe = EKFwheel(t,a1,a2,0.07,5.0)
Explanation: After this we can run the filter:
End of explanation
%matplotlib inline
plt.plot(xe[:,0],xe[:,1], 'b-', label=u'EKF distance, m')
plt.plot(xe[:,0],xe[:,2], 'g-', label=u'EKF velocity, m/s')
plt.plot(xe[:,0],xe[:,3], 'r-', label=u'EKF acceleration, m/s^2')
plt.plot(xe[:,0],xe[:,4], color='orange', label=u'rotation angle, rad')
plt.xlim(1,11)
plt.grid(color='grey')
plt.title(u'Extended Kalman filter (EKF) results', fontweight='bold')
plt.xlabel(u'time (s)')
plt.legend(loc='upper left', shadow=False, prop={'size':8})
plt.show()
Explanation: Finally, we plot the results obtained with the extended Kalman filter:
End of explanation |
8,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quantifying Influence of The Beatles and The Rolling Stones<br><br>
With the data exported from the MusicBrainz database, which is further cleaned and aggregated in this notebook, I have refined the set of observations around the songs recorded and released by two artists. The response by which I will measure their influence is other artists' recorded use of their songs, "cover versions" or "covers" in the vernacular. This response is "times_covered".<br><br>
I also acquired the lyrics for as many of the original songs as I could, using Beautiful Soup to scrape the lyrics from [lyrics.wikia.com](http://lyrics.wikia.com).
Step1: Check coverage of lyrics for the original songs.
96.6% have lyrics (and lyric sentiment polarity score) in the dataset. Lyric sentiment is a valuable predictor.
Step2: Create base set of features for fitting to models
Step3: Build model with Random Forest Classifier (RFC)
Step4: I have chosen 6 features for the RFC model after running a looped evaluation of the maximum features for the model using cross validation.
Step5: At 92% and 90% respectively, both the out of bag and cross-validation scores are quite positive for the Random Forest Classifier.
Testing Random Forest Regressor to predict "times_covered"
Step6: I conclude that I can predict whether a song will be covered far more accurately than how many times it will be covered.<br>
90% vs. 28%
<br><br>
Test with DecisionTreeClassifier
While seeking a less opaque model than the Random Forest, I tried the DecisionTreeClassifier, to take advantage of the very nice method for ranking the importance of the features.
Step7: As shown in the bar chart below, year and lyric sentiment are better predictors of whether or not a song is covered.
Step8: While year may be important, I feel the weight of it is skewed by the results indicating the vast majority of the Beatles catalog has been covered, and well distributed across all years represented.
Let's review some simples metrics comparing the two bands.
A quick measure of songs covered by release year for both artists.
<br> While The Beatles disbanded by 1970, The Stones continue to this day. However, the Stones' early work appears far more influential, with their greatest body of influence more or less paralleling that of the more-covered Beatles. Let's put a number on that, shall we?
Step9: It appears that a pretty large percentage of the artist's catalogs have been covered at least once.<br>
90% of Beatles songs have been covered, versus 66% of Rolling Stones songs. By this measure, the Liverpudlians may be deemed "more influential".
Step10: "Top of the Pops"
Step11: Plot times covered by year.
Step12: The Beatles catalog essentialy ends in 1970 when they disbanded (the outliers are most likely bad release dates in my data). The Rolling Stones continue into this century. However, their years of greatest influence are similar, spanning the first 8 years of releases.<br><br>
Test Logistic Regression Model
Step13: Null Accuracy result is 37%.
As we will see below, when we fit the Logistic Regression estimator with our data and compute a cross validation score we improve significantly over the null test result.
Step14: I am able to predict with 80% probability that a song will be covered.<br><br>
Compute ROC curve and AUC score.
Step15: Lyric Sentiment by Artist | Python Code:
### Import as many items as possible to have available.
### Import data from CSV
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sklearn.naive_bayes import MultinomialNB
data = pd.read_csv('data/Influence_clean.csv', header=0,encoding= 'utf-8', delimiter='|')
data['minreleasedate'] = pd.to_datetime(pd.Series(data.minreleasedate))
data['times_covered'].fillna(0, inplace=True)
data['artistid'] = data.artist.map({'The Beatles':0, 'The Rolling Stones':1})
data['artist'] = data.artist.astype('category')
data['songname'] = data.songname.astype('category')
# Add column for year of release for simpler grouping.
data['year'] = data['minreleasedate'].apply(lambda x: x.year)
# Make binary response - song has been covered or not. Far better accuracy over "times covered".
data['is_covered'] = data.times_covered.map(lambda x: 1 if x > 0 else 0)
Explanation: Quantifying Influence of The Beatles and The Rolling Stones<br><br>
With the data exported from the MusicBrainz database, which is further cleaned and aggregated in this notebook, I have refined the set of observations around the songs recorded and released by two artists. The response by which I will measure their influence is measured by other artists recorded use of their songs, "cover versions" or "covers" in the vernacular. This response is the "times_covered"<br><br>
I also acquired the lyrics for as many of the original songs as I could using Beautiful Soup to scrape the lyrics from [lyrics.wikia.com](http://lyrics.wikia.com). I was able to acquire lyrical content for 96.5% of the original songs to apply sentiment analysis using TextBlob. The sentiment polarity was applied separately to both the song title and the lyrics themselves to create features to augment the song/release data. As I am able to show below, the lyric sentiment is one of the more predictive measurements, second only to the year a song was released.<br><br>
With each row in my dataset - songname, artist, etc., I then merged the number of times the song was covered ("times_covered") and the number of artists covering the song ("artist_cnt"). I also created a binary response ("is_covered") as a simpler indicator that the song was used over the number of times used. This proved to be more predictable than the number of times covered. Also included is the average rating of the cover versions per song though I suspect the data is not so useful.
End of explanation
# fraction of original songs *missing* lyrics; lyric coverage is 1 minus this value
data[(data.is_cover == 0) & (data.lyrics.isnull())].workid.count().astype(float)/data[(data.is_cover == 0)].workid.count()
Explanation: Check coverage of lyrics for the original songs.
96.6% have lyrics (and lyric sentiment polarity score) in the dataset. Lyric sentiment is a valuable predictor.
End of explanation
feature_cols = [ 'year','num_releases','lyric_sent','title_sent', 'countries', 'avg_rating']
X= data[data.is_cover == 0][feature_cols]
y = data[data.is_cover == 0].is_covered
y_regress = data[data.is_cover == 0].times_covered
print X.shape
print y.shape
print y_regress.shape
Explanation: Create base set of features for fitting to models
End of explanation
feature_range = range(1, len(feature_cols)+1)
# list to store the accuracy score for each value of max_features
Acc_scores = []
# use 10-fold cross-validation with each value of max_features (WARNING: SLOW!)
for feature in feature_range:
rfclass = RandomForestClassifier(n_estimators=500, max_features=feature, random_state=50)
acc_val_scores = cross_val_score(rfclass, X, y, cv=10, scoring='accuracy')
Acc_scores.append(acc_val_scores.mean())
Explanation: Build model with Random Forest Classifier (RFC)
End of explanation
# plot max_features (x-axis) versus Accuracy score (y-axis)
plt.plot(feature_range, Acc_scores)
plt.xlabel('max_features')
plt.ylabel('Accuracy (higher is better)')
rfclass = RandomForestClassifier(n_estimators=175, max_features=6,oob_score=True, random_state=50)
rfclass.fit(X, y)
print rfclass.oob_score_
print cross_val_score(rfclass, X, y, cv=10, scoring='accuracy').mean()
Explanation: I have chosen 6 features for the RFC model after running a looped evaluation of the maximum features for the model using cross validation.
End of explanation
from sklearn.feature_selection import SelectFromModel
rfreg = RandomForestRegressor(n_estimators=100, max_features=6, random_state=111)
rfreg.fit(X,y_regress)
sfm = SelectFromModel(rfreg, threshold='mean', prefit=True)
X_important = sfm.transform(X)
print(X_important.shape[0],X_important.shape[1])
rfreg = RandomForestRegressor(n_estimators=100, max_features=3, random_state=111)
scores = cross_val_score(rfreg, X_important, y_regress, cv=10, scoring='mean_squared_error')
np.mean(np.sqrt(-scores))
Explanation: At 92% and 90% respectively, both the out of bag and cross-validation scores are quite positive for the Random Forest Classifier.
Testing Random Forest Regressor to predict "times_covered"
End of explanation
treeclf = DecisionTreeClassifier(max_depth = 15, random_state=123)
treeclf.fit(X, y)
# mean cross-validated accuracy (the square root used for the regressor's RMSE does not apply here)
scores = cross_val_score(treeclf, X, y, cv=10)
scores.mean()
Explanation: I conclude that I can predict whether a song will be covered far more accurately than how many times it will be covered.<br>
90% vs. 28%
<br><br>
Test with DecisionTreeClassifier
While seeking a less opaque model than the Random Forest, I tried the DecisionTreeClassifier, to take advantage of the very nice method for ranking the importance of the features.
End of explanation
pd.DataFrame({'feature':feature_cols, 'importance':treeclf.feature_importances_}).sort_values('importance').plot(kind='bar',x='feature',figsize=(16,5),fontsize='14',title="Feature Importance")
Explanation: As shown in the bar chart below, year and lyric sentiment are better predictors of whether or not a song is covered.
End of explanation
yticks = np.arange(100, 1000, 100)
data[data.times_covered > 0].groupby('year').times_covered.sum().plot(kind='bar',x='year',y='times_covered',figsize=(16,9))
plt.yticks(yticks)
plt.title('Total songs covered by year of original',size =28)
plt.ylabel('Times Covered (Sum)', size = 24)
### First plot is The Beatles, second The Rolling Stones. Sum of times_covered by year of oriiginal relase of the song.
bar = data.sort_values(by='year').groupby(['year', 'artist'])['times_covered'].sum().unstack('artist')
yticks = np.arange(25, 1000, 50)
bar.plot(kind='bar', stacked=True,figsize=(16,12),subplots='True')
plt.yticks(yticks)
plt.title('Total songs covered per Artist, by year of original',size =24)
plt.ylabel('Times Covered', size = 24)
Explanation: While year may be important, I feel the weight of it is skewed by the results indicating the vast majority of the Beatles catalog has been covered, and well distributed across all years represented.
Let's review some simples metrics comparing the two bands.
A quick measure of songs covered by release year for both artists.
<br> While The Beatle disbanded by 1970, The Stones continue to this day. However, their early work appears far more influential, with the greatest body of influence more or less paralelling that of The more covered Beatles. Let's put a number on that, shall we?
End of explanation
# Throw out covers recorded by each band and see what percentage of their catalogs have been covered.
inf = data[(data.is_cover == 0) & (data.is_covered == 1)].groupby('artist').workid.count()/data[(data.is_cover == 0)].groupby('artist').workid.count()
inf.plot(kind='bar', figsize=(16,5),fontsize=18,rot=40)
# Create new DataFrame with Top 10 most covered songs for each band.
top10 = data[(data.artistid == 1) & (data.is_cover ==0)][['artist','songname','minreleasedate', 'is_cover','times_covered']].sort_values(by='times_covered',ascending=False)[:10]
top10 = top10.append(data[(data.artistid == 0) & (data.is_cover ==0)][['artist','songname','minreleasedate', 'is_cover','times_covered']].sort_values(by='times_covered',ascending=False)[:10])
Explanation: It appears that a pretty large percentage of the artist's catalogs have been covered at least once.<br>
90% of Beatles songs have been covered, 66% of Rolling Stones. By this measure, the Liverpudlians may be deemed "more influential".
End of explanation
top10['dateint'] = top10['minreleasedate'].apply(lambda x: x.year)
colors = np.where(top10.artist == 'The Beatles', 'b', 'g')
top10.groupby('artist').plot(kind='bar', x='songname', y='times_covered',color=colors,rot=30,fontsize=14,legend=True,figsize=(16,5))
#top10.plot(kind='scatter', x='dateint', y='times_covered', s=220,c=colors, legend='artist',figsize=(12,8))
plt.title('Top 10 Covered Per Artist',size=20)
plt.xlabel('Times Covered')
plt.ylabel('Song')
print top10.groupby('artist').times_covered.mean()
data[data.is_covered > 0].groupby(['artist', 'year']).songname.count().unstack('artist').plot(kind='bar',subplots='True',figsize=(16,9))
plt.title("Original Songs Released Per Year", size=20)
Explanation: "Top of the Pops"
End of explanation
bar2 = data[data.year < 2000].sort_values(by='year',ascending='True').groupby(['year', 'artist'])['times_covered'].sum().unstack('artist')
bar2.plot(kind='area',figsize=(16,5),fontsize=16)
plt.title("Original Songs Released Per Year", size=20)
Explanation: Plot times covered by year.
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=101)
# Compute Null accuracy
y_null = np.zeros_like(y_test, dtype=float)
# fill the array with the mean value of y_test
y_null.fill(y_test.mean())
y_null
np.sqrt(metrics.mean_squared_error(y_test, y_null))
Explanation: The Beatles catalog essentialy ends in 1970 when they disbanded (the outliers are most likely bad release dates in my data). The Rolling Stones continue into this century. However, their years of greatest influence are similar, spanning the first 8 years of releases.<br><br>
Test Logistic Regression Model
End of explanation
logreg = LogisticRegression(C=1e9)
#solver='newton-cg',multi_class='multinomial',max_iter=100
logreg.fit(X_train, y_train)
zip(feature_cols, logreg.coef_[0])
print cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean()
y_pred_prob = logreg.predict_proba(X_test)[:,1 ]
print(y_pred_prob).mean()
Explanation: Null Accuracy result is 37%.
As we will see below, when we fit the Logistic Regression estimator with our data and compute a cross validation score we improve significantly over the null test result.
End of explanation
# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
# calculate AUC
print metrics.roc_auc_score(y_test, y_pred_prob)
print cross_val_score(logreg, X, y, cv=10, scoring='roc_auc').mean()
Explanation: I am able to predict with 80% probability that a song will be covered.<br><br>
Compute ROC curve and AUC score.
End of explanation
data[['artist','lyric_sent']].groupby('artist').boxplot(return_type='axes',figsize=(16,6),fontsize=16)
Explanation: Lyric Sentiment by Artist
End of explanation |
8,467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IBM Employee Attrition Prediction
Introduction
address
Step1: 1. Exploratory Data Analysis
Let's load the dataset with Pandas and take a quick look at the first few rows; our key interest is attrition
Step2: Looking at the dataset, our target column is Attrition
In addition, our data is a mix of categorical and numerical columns. We will encode the non-numerical categories into numbers later; first we explore the dataset, starting with a check of its completeness: a quick check for null or infinite values
Data quality checks
We can use the isnull() function to see whether any values are missing
Step3: Distribution of the dataset
An early step is usually to explore how the features of the dataset are distributed. To do this, we call the kdeplot() function from the Seaborn plotting library and generate bivariate plots as follows
Step4: Correlation of Features
The next exploratory tool is the correlation matrix. Plotting a correlation matrix gives a good description of how the features relate to one another. On a Pandas dataframe, the corr function provides the Pearson correlation coefficient for each pair of columns (a statistic reflecting the degree of linear correlation between two variables)
Here I will use the Heatmap() function from the Plotly library to plot the Pearson correlation matrix
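To make the statistic concrete before the heatmap, here is a tiny numeric illustration (an added sketch; pandas' corr uses the same Pearson definition as numpy's corrcoef):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1        # perfect positive linear relation -> coefficient near +1
z = 5 - x            # perfect negative linear relation -> coefficient near -1
print(np.corrcoef(x, y)[0, 1], np.corrcoef(x, z)[0, 1])
```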
Step5: Takeaway from the plots
From the plots above we can see that quite a few columns appear only weakly correlated with one another. Generally, for a predictive model it is best if the training features are not correlated with each other, since we do not need redundant data. In a case like this one, where we do have quite a few correlated features, perhaps we should apply PCA (Principal Component Analysis) to reduce the feature space
Pairplot Visualisations
Now let's create some Seaborn pairplots with the Attrition column as the target variable, to see how each feature's distribution bears on attrition
Step6: 2. Feature Engineering & Categorical Encoding
We have done some simple exploration of the dataset; now we move on to feature engineering and numerical encoding of the categorical columns. Feature engineering, simply put, is creating new features from relationships among existing ones, and it matters a great deal.
Before starting, we use the dtype attribute to separate the numerical columns from the categorical ones
Step7: Having identified which features hold categorical data, we can encode them numerically using Pandas' get_dummies() method
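For intuition, here is what one-hot encoding does, sketched by hand for a single toy column (an illustrative addition; in the notebook pd.get_dummies handles every categorical column at once):

```python
values = ["Sales", "R&D", "Sales", "HR"]          # toy categorical column
categories = sorted(set(values))                  # one output column per category
dummies = [[int(v == c) for c in categories] for v in values]
print(categories)  # ['HR', 'R&D', 'Sales']
print(dummies)     # [[0, 0, 1], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
```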
Step8: With get_dummies() applied for automatic encoding, we can conveniently inspect the encoded result with the following code
Step9: Extract the columns that are numerical
Step10: We have encoded the non-numerical variables and extracted the numerical ones; now we merge them into the final training data
Step11: Target variable
Finally, we need the target variable, given by the attrition column; we encode it as 1 for Yes and 0 for No
Step12: However, checking the counts of Yes and No reveals that the data is heavily skewed
Step13: Our data is therefore imbalanced. There are many ways to handle class imbalance; here we use the SMOTE oversampling technique
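SMOTE synthesises new minority-class samples by interpolating between a minority point and one of its nearest neighbours. Below is a minimal numpy sketch of that interpolation idea only (an illustrative addition: it picks an arbitrary second minority point rather than a true k-nearest neighbour, and the notebook itself relies on imblearn's SMOTE):

```python
import numpy as np

rng = np.random.default_rng(0)
minority = rng.normal(loc=3.0, scale=0.5, size=(10, 2))  # toy minority-class points

def smote_like(Xmin, n_new, rng):
    # pick two distinct minority points and interpolate between them:
    # x_new = x_i + gap * (x_j - x_i), gap drawn uniformly from [0, 1)
    new = []
    for _ in range(n_new):
        i, j = rng.choice(len(Xmin), size=2, replace=False)
        gap = rng.random()
        new.append(Xmin[i] + gap * (Xmin[j] - Xmin[i]))
    return np.array(new)

synth = smote_like(minority, 25, rng)
print(synth.shape)  # (25, 2)
```

Each synthetic point lies on a segment between two real minority points, so it stays inside the minority class's bounding box.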
3. Implementing Machine Learning Models
Having done some exploratory analysis and simple feature engineering, and made sure all of our data is encoded, we can now build our models
At the start of this notebook we said our goal was to evaluate and compare the performance of several different models
Splitting into training and test sets
Before we train on the data we need a training set and a test set. Unlike a Kaggle competition, where ready-made training and test sets are usually provided, here we use sklearn to split the data ourselves
Step14: SMOTE to oversample due to the skewness in target
Since we have already noted the imbalance in the target values, let's address it with the imblearn package.
Step15: A. Random Forest Classifier
The random forest classifier is built from the ubiquitous decision tree. A decision tree on its own is usually considered a "weak learner" because of its poor predictive power; a random forest, however, combines a collection of decision trees, and the ensemble achieves strong predictive performance, making it a "strong learner"
Initialising Random Forest parameters
We will use the Random Forest model from the scikit-learn library; first we define our parameters
Step16: We can use scikit-learn's RandomForestClassifier() function to initialise the random forest and pass in the parameters
Step17: Now we train the model
Step18: Now we can make predictions on the test data
Step19: Score the predictions
Step20: Accuracy of the model
We observe that the random forest classifier reaches 88% accuracy. At first glance this looks like a very good model, but considering that our class distribution is roughly 84% "No" to 16% "Yes", this prediction is not much better than always guessing the majority class
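A quick arithmetic check of that point (an added illustration, assuming the roughly 84/16 split): always predicting the majority class already scores about 84%, so 88% is only a modest gain.

```python
n_majority, n_minority = 840, 160                  # assumed ~84/16 class split
baseline = n_majority / (n_majority + n_minority)  # always predict the majority class
print(baseline)  # 0.84
```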
Feature Ranking via the Random Forest
sklearn's random forest classifier includes a very handy and useful attribute, feature_importances_, which reveals the features most important to the forest algorithm. The chart below shows the most important features
Step21: Most RF important features
Step22: B. Gradient Boosted Classifier
Gradient boosting is an ensemble technique, much like random forests, that combines weak tree learners into one strong learner. The technique involves defining some method (an algorithm) to minimise a loss function; as the name implies, the loss is reduced by gradient descent, stepping in the direction that decreases the loss function's value.
Using the Gradient Boosted classifier in sklearn is very simple, needing only a few lines of code; we first set the classifier parameters
Step23: With the parameters defined, we can train, predict, and score
Step24: Feature Ranking via the Gradient Boosting Model
Let's look at the features most important to the Gradient Boosting Model | Python Code:
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Import statements required for Plotly
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss
from imblearn.over_sampling import SMOTE
import xgboost
# Import and suppress warnings
import warnings
warnings.filterwarnings('ignore')
Explanation: IBM Employee Attrition Prediction
Introduction
address: https://github.com/ghvn7777/kaggle/blob/master/ibm_employee/predict_ibm_attrition.ipynb
Keeping employees happy and satisfied with the company is an age-old challenge. If you invest heavily in an employee who then leaves, you have to spend even more time hiring someone new. In the spirit of Kaggle, let's build a predictive model for IBM employee attrition from IBM's dataset
This notebook covers the following:
Exploratory Data Analysis: in this section we explore the distributions of the dataset's features, how the features relate to one another, and visualise them
Feature Engineering and Categorical Encoding: we do some feature engineering and encode our categorical features into numerical variables
Implementing Machine Learning models: we implement a random forest and a gradient boosting model, then look at feature importance within these models
Let's Go.
End of explanation
attrition = pd.read_csv('./inputs/WA_Fn-UseC_-HR-Employee-Attrition.csv')
attrition.head()
Explanation: 1. Exploratory Data Analysis
Let's load the dataset with Pandas and take a quick look at the first few rows; our key interest is attrition
End of explanation
#Looking for NaN
attrition.isnull().any()
Explanation: Looking at the dataset, our target column is Attrition
In addition, our data is a mix of categorical and numerical columns. We will encode the non-numerical categories into numbers later; first we explore the dataset, starting with a check of its completeness: a quick check for null or infinite values
Data quality checks
We can use the isnull() function to see whether any values are missing
End of explanation
# Plotting the KDEplots
f, axes = plt.subplots(3, 3, figsize=(10, 10), sharex=False, sharey=False)
# Defining our colormap scheme
# s was originally meant to vary the palette start values, but they ended up hard-coded in steps of 0.333...
#s = np.linspace(0, 3, 10) # 10 evenly spaced values over the interval [0, 3]
# build a sequential palette; `light` is the intensity of the palette's lightest colour (1 = lightest),
# and as_cmap=True returns a matplotlib colormap
cmap = sns.cubehelix_palette(start=0.0, light=1, as_cmap=True)
# Generate and plot
x = attrition['Age'].values
y = attrition['TotalWorkingYears'].values
# plot a univariate or bivariate kernel density estimate; shade=True fills the contours for bivariate data
# cut=5 trims the estimate a few bandwidths (bw, another kdeplot parameter that controls how closely the estimate fits the data) beyond the extreme data points
# the larger cut is, the smaller the plot becomes and the denser the data appear
# the ax parameter selects the axes to draw on; by default the current axes are used
sns.kdeplot(x, y, cmap=cmap, shade=True, cut=5, ax=axes[0,0])
axes[0,0].set( title = 'Age against Total working years')
cmap = sns.cubehelix_palette(start=0.333333333333, light=1, as_cmap=True)
# Generate and plot
x = attrition['Age'].values
y = attrition['DailyRate'].values
sns.kdeplot(x, y, cmap=cmap, shade=True, ax=axes[0,1])
axes[0,1].set( title = 'Age against Daily Rate')
cmap = sns.cubehelix_palette(start=0.666666666667, light=1, as_cmap=True)
# Generate and plot
x = attrition['YearsInCurrentRole'].values
y = attrition['Age'].values
sns.kdeplot(x, y, cmap=cmap, shade=True, ax=axes[0,2])
axes[0,2].set( title = 'Years in role against Age')
cmap = sns.cubehelix_palette(start=1.0, light=1, as_cmap=True)
# Generate and plot
x = attrition['DailyRate'].values
y = attrition['DistanceFromHome'].values
sns.kdeplot(x, y, cmap=cmap, shade=True, ax=axes[1,0])
axes[1,0].set( title = 'Daily Rate against DistancefromHome')
cmap = sns.cubehelix_palette(start=1.333333333333, light=1, as_cmap=True)
# Generate and plot
x = attrition['DailyRate'].values
y = attrition['JobSatisfaction'].values
sns.kdeplot(x, y, cmap=cmap, shade=True, ax=axes[1,1])
axes[1,1].set( title = 'Daily Rate against Job satisfaction')
cmap = sns.cubehelix_palette(start=1.666666666667, light=1, as_cmap=True)
# Generate and plot
x = attrition['YearsAtCompany'].values
y = attrition['JobSatisfaction'].values
sns.kdeplot(x, y, cmap=cmap, shade=True, ax=axes[1,2])
axes[1,2].set( title = 'Daily Rate against distance')
cmap = sns.cubehelix_palette(start=2.0, light=1, as_cmap=True)
# Generate and plot
x = attrition['YearsAtCompany'].values
y = attrition['DailyRate'].values
sns.kdeplot(x, y, cmap=cmap, shade=True, ax=axes[2,0])
axes[2,0].set( title = 'Years at company against Daily Rate')
cmap = sns.cubehelix_palette(start=2.333333333333, light=1, as_cmap=True)
# Generate and plot
x = attrition['RelationshipSatisfaction'].values
y = attrition['YearsWithCurrManager'].values
sns.kdeplot(x, y, cmap=cmap, shade=True, ax=axes[2,1])
axes[2,1].set( title = 'Relationship Satisfaction vs years with manager')
cmap = sns.cubehelix_palette(start=2.666666666667, light=1, as_cmap=True)
# Generate and plot
x = attrition['WorkLifeBalance'].values
y = attrition['JobSatisfaction'].values
sns.kdeplot(x, y, cmap=cmap, shade=True, ax=axes[2,2])
axes[2,2].set( title = 'WorklifeBalance against Satisfaction')
f.tight_layout()
# Define a dictionary for the target mapping
target_map = {'Yes':1, 'No':0}
# Use the pandas apply method to numerically encode our attrition target variable
attrition["Attrition_numerical"] = attrition["Attrition"].apply(lambda x: target_map[x])
attrition
Explanation: Distribution of the dataset
The first few steps usually explore how the dataset's features are distributed. To do this, we call the kdeplot() function from the Seaborn plotting library and generate bivariate plots as follows:
End of explanation
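As a rough, standalone illustration of the bivariate density idea behind kdeplot() (no plotting, just density evaluation; assumes SciPy is available, and uses synthetic points rather than the attrition columns):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.RandomState(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.5, size=200)

kde = gaussian_kde(np.vstack([x, y]))   # fit a 2-D kernel density estimate
dens_center = kde([[0.0], [0.0]])[0]    # density near the cloud's center
dens_far = kde([[10.0], [10.0]])[0]     # density far away from the data
print(dens_center > dens_far)           # True: KDE is high where points cluster
```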
# creating a list of only numerical values
numerical = [u'Age', u'DailyRate', u'DistanceFromHome', u'Education', u'EmployeeNumber', u'EnvironmentSatisfaction',
u'HourlyRate', u'JobInvolvement', u'JobLevel', u'JobSatisfaction',
u'MonthlyIncome', u'MonthlyRate', u'NumCompaniesWorked',
u'PercentSalaryHike', u'PerformanceRating', u'RelationshipSatisfaction',
u'StockOptionLevel', u'TotalWorkingYears',
u'TrainingTimesLastYear', u'WorkLifeBalance', u'YearsAtCompany',
u'YearsInCurrentRole', u'YearsSinceLastPromotion',
u'YearsWithCurrManager']
data = [
go.Heatmap(
z= attrition[numerical].astype(float).corr().values, # Generating the Pearson correlation
x=attrition[numerical].columns.values,
y=attrition[numerical].columns.values,
colorscale='Viridis',
reversescale = False, # reverse the color scale
text = True,
opacity = 1.0 # opacity
)
]
layout = go.Layout(
title='Pearson Correlation of numerical features',
xaxis = dict(ticks='', nticks=36),
yaxis = dict(ticks='' ),
width = 900, height = 700,
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='labelled-heatmap')
Explanation: Correlation of Features
Our next exploratory tool is the correlation matrix. Plotting one gives a good picture of how the features relate to each other. On a Pandas dataframe, the corr method returns the Pearson correlation coefficient (a statistic that measures the degree of linear correlation between two variables) for every pair of columns.
Here, I use the Heatmap() function from the Plotly library to draw the Pearson correlation matrix:
End of explanation
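The same corr() call, applied to a tiny hand-made frame, makes the ±1 extremes of the Pearson coefficient easy to verify:

```python
import pandas as pd

df = pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0],
                   'y': [2.0, 4.0, 6.0, 8.0],    # y = 2x: perfectly correlated
                   'z': [4.0, 3.0, 2.0, 1.0]})   # z decreases linearly as x grows
corr = df.corr()                                 # Pearson by default
print(corr.loc['x', 'y'])                        # 1.0
print(corr.loc['x', 'z'])                        # -1.0
```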
# Refining our list of numerical variables
numerical = [u'Age', u'DailyRate', u'JobSatisfaction',
u'MonthlyIncome', u'PerformanceRating',
u'WorkLifeBalance', u'YearsAtCompany', u'Attrition_numerical']
#g = sns.pairplot(attrition[numerical], hue='Attrition_numerical', palette='seismic', diag_kind = 'kde',diag_kws=dict(shade=True))
#g.set(xticklabels=[])
Explanation: Takeaway from the plots
From the plots above, quite a few columns appear to be only weakly correlated with one another. Generally, when building a predictive model it is preferable for the training features to be mutually uncorrelated, since redundant data adds nothing. Given that we do have a fair number of correlated features, we could consider applying PCA (Principal Component Analysis) to reduce the feature space.
Pairplot Visualisations
Now let us create some Seaborn pairplots with the Attrition column as the target variable, to see how the distribution of each feature relates to attrition.
End of explanation
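A minimal sketch of the PCA idea mentioned above, on synthetic data rather than the attrition frame (scikit-learn's PCA; shapes and columns are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
base = rng.normal(size=(100, 1))
# Column 2 is almost a copy of column 1 -> redundant; column 3 is independent
X = np.hstack([base,
               2.0 * base + rng.normal(scale=0.1, size=(100, 1)),
               rng.normal(size=(100, 1))])

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)       # 3 correlated features -> 2 components
print(X_reduced.shape)                 # (100, 2)
print(pca.explained_variance_ratio_.sum() > 0.9)   # True: little variance lost
```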
attrition
# Drop the Attrition_numerical column from attrition dataset first - Don't want to include that
attrition = attrition.drop(['Attrition_numerical'], axis=1)
# Empty list to store columns with categorical data
categorical = []
for col, value in attrition.items():
if value.dtype == 'object':
categorical.append(col)
# Store the numerical columns in a list numerical
print(categorical)
numerical = attrition.columns.difference(categorical)
numerical
Explanation: 2. Feature Engineering & Categorical Encoding
We have done some simple exploration of the dataset; now we move on to feature engineering and numerically encoding the categorical data. Feature engineering, simply put, means creating new features from relationships among existing ones, and it matters a great deal.
Before starting, we separate the numerical and categorical columns using the dtype attribute.
End of explanation
# Store the categorical data in a dataframe called attrition_cat
attrition_cat = attrition[categorical] # extract the non-numerical columns
attrition_cat = attrition_cat.drop(['Attrition'], axis=1) # Dropping the target column
print(attrition_cat)
Explanation: Having identified the features that hold categorical data, we can encode them numerically using the Pandas get_dummies() method
End of explanation
attrition_cat = pd.get_dummies(attrition_cat)
attrition_cat.head(3)
Explanation: get_dummies() performs the encoding automatically; we can conveniently inspect the encoded result with the code below
End of explanation
# Store the numerical features to a dataframe attrition_num
attrition_num = attrition[numerical]
Explanation: 提取出是数字的列
End of explanation
# Concat the two dataframes together columnwise
attrition_final = pd.concat([attrition_num, attrition_cat], axis=1)
Explanation: We have encoded the non-numerical variables and extracted the numerical ones; now we merge them into the final training data
End of explanation
# Define a dictionary for the target mapping
target_map = {'Yes':1, 'No':0}
# Use the pandas apply method to numerically encode our attrition target variable
target = attrition["Attrition"].apply(lambda x: target_map[x])
target.head(3)
Explanation: Target variable
Finally, we need the target variable, given by the Attrition column. We encode it as 1 for Yes and 0 for No
End of explanation
data = [go.Bar(
x=attrition["Attrition"].value_counts().index.values,
y= attrition["Attrition"].value_counts().values
)]
py.iplot(data, filename='basic-bar')
Explanation: However, checking the counts of Yes and No reveals a very large skew in the data
End of explanation
# Import the train_test_split method
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedShuffleSplit
# Split data into train and test sets as well as for validation and testing
train, test, target_train, target_val = train_test_split(attrition_final, target, train_size= 0.75,random_state=0);
#train, test, target_train, target_val = StratifiedShuffleSplit(attrition_final, target, random_state=0);
Explanation: Our data is therefore imbalanced. There are many ways to deal with class imbalance; here we use the SMOTE oversampling technique.
3. Implementing Machine Learning Models
Having done some exploratory analysis and simple feature engineering, and made sure all of our data is encoded, we can now build our models.
At the start of this notebook we said our goal was to evaluate and compare the performance of several different models.
Splitting into training and test data
Before training we need a training set and a test set. Unlike in a Kaggle competition, where ready-made train and test sets are usually provided, here we use sklearn to split the data ourselves.
End of explanation
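A standalone sketch of a stratified split with the current sklearn API (sklearn.model_selection; toy labels, not the attrition target). The stratify= argument keeps the class ratio equal in both parts, which matters for imbalanced targets like this one:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(40).reshape(20, 2)
y = np.array([0] * 16 + [1] * 4)       # 20% positives, like an imbalanced target

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.75, stratify=y, random_state=0)
print(len(y_tr), len(y_te))            # 15 5
print(y_tr.mean(), y_te.mean())        # 0.2 0.2 -- ratio preserved in both splits
```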
oversampler=SMOTE(random_state=0)
smote_train, smote_target = oversampler.fit_resample(train, target_train)
Explanation: SMOTE to oversample due to the skewness in target
Since we have already noted the imbalance in the target values, let us address it via the imblearn package.
End of explanation
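For intuition, SMOTE creates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbours. A toy numpy sketch of that idea (not the imblearn implementation):

```python
import numpy as np

def smote_like(X_min, n_new, k=3, seed=0):
    # Each synthetic point lies on the segment between a random minority
    # sample and one of its k nearest minority neighbours.
    rng = np.random.RandomState(seed)
    out = []
    for _ in range(n_new):
        i = rng.randint(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        j = rng.choice(np.argsort(d)[1:k + 1])   # skip the point itself
        lam = rng.rand()                         # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_like(X_min, n_new=5)
print(X_new.shape)                               # (5, 2)
```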
seed = 0 # We set our random seed to zero for reproducibility
# Random Forest parameters
rf_params = {
'n_jobs': -1,
'n_estimators': 800,
'warm_start': True,
'max_depth': 9,
'min_samples_leaf': 2,
'max_features' : 'sqrt',
'random_state' : seed,
'verbose': 0
}
Explanation: A. Random Forest Classifier
The random forest classifier is built from the ubiquitous decision tree. A decision tree on its own is usually considered a 'weak learner' because of its modest predictive power, whereas a random forest combines a collection of decision trees whose joint power gives strong predictive performance - a 'strong learner'.
Initialising Random Forest parameters
We will use the Random Forest model from the scikit-learn library; first we define our parameters
End of explanation
rf = RandomForestClassifier(**rf_params)
Explanation: We can initialise the random forest with scikit-learn's RandomForestClassifier() and pass in the parameters
End of explanation
rf.fit(smote_train, smote_target)
print("Fitting of Random Forest as finished")
Explanation: Now we train the model:
End of explanation
rf_predictions = rf.predict(test)
print("Predictions finished")
Explanation: Now we can make predictions on the test data:
End of explanation
accuracy_score(target_val, rf_predictions)
Explanation: Score the predictions:
End of explanation
# Scatter plot
trace = go.Scatter(
y = rf.feature_importances_,
x = attrition_final.columns.values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 13,
#size= rf.feature_importances_,
#color = np.random.randn(500), #set color equal to a variable
color = rf.feature_importances_,
colorscale='Portland',
showscale=True
),
text = attrition_final.columns.values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'Random Forest Feature Importance',
hovermode= 'closest',
xaxis= dict(
ticklen= 5,
showgrid=False,
zeroline=False,
showline=False
),
yaxis=dict(
title= 'Feature Importance',
showgrid=False,
zeroline=False,
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter2010')
Explanation: Accuracy of the model
We observe that the random forest classifier reaches about 88% accuracy. At first glance that looks like a very good model, but once we note that the class distribution is roughly 84% 'No' to 16% 'Yes', the model predicts barely better than always guessing the majority class.
Feature Ranking via the Random Forest
scikit-learn's random forest classifier exposes a very convenient and useful attribute, feature_importances_, which reports the features the forest algorithm considers most important. The chart below shows the most important ones:
End of explanation
from sklearn import tree
from IPython.display import Image as PImage
from subprocess import check_call
from PIL import Image, ImageDraw, ImageFont
import re
decision_tree = tree.DecisionTreeClassifier(max_depth = 4)
decision_tree.fit(train, target_train)
# Predicting results for test dataset
y_pred = decision_tree.predict(test)
# Export our trained model as a .dot file
with open("tree1.dot", 'w') as f:
f = tree.export_graphviz(decision_tree,
out_file=f,
max_depth = 4,
impurity = False,
feature_names = attrition_final.columns.values,
class_names = ['No', 'Yes'],
rounded = True,
filled= True )
#Convert .dot to .png to allow display in web notebook
check_call(['dot','-Tpng','tree1.dot','-o','tree1.png'])
# Annotating chart with PIL
img = Image.open("tree1.png")
draw = ImageDraw.Draw(img)
img.save('sample-out.png')
PImage("sample-out.png")
Explanation: Most RF important features: Overtime, Marital Status
The chart above shows the features that matter most. The algorithm ranks the overtime feature as the most important, followed by marital status.
I don't know which matters most to you, but for me overtime really does affect how satisfied I am with a job, so perhaps we should not be surprised that the classifier, having fit the target, ranks overtime at the top.
Visualising Tree Diagram with Graphviz
Let us display one of the trees. We can traverse a single decision tree via a DecisionTreeClassifier object and render it as a PNG image using the export_graphviz() function:
End of explanation
# Gradient Boosting Parameters
gb_params ={
'n_estimators': 500,
'learning_rate' : 0.2,
'max_depth': 11,
'min_samples_leaf': 2,
'subsample': 1,
'max_features' : 'sqrt',
'random_state' : seed,
'verbose': 0
}
Explanation: B. Gradient Boosted Classifier
Gradient boosting is an ensemble technique, much like random forests, that combines weak tree learners into a strong one. The technique involves defining some method (algorithm) to minimise a loss function; hence the name, since the way the loss is minimised is gradient descent - stepping in the direction that reduces the loss function's value.
Using a Gradient Boosted classifier in sklearn is very simple and takes only a few lines of code. We first set the classifier parameters:
Initialising Gradient Boosting Parameters
In general there are a few key parameters when configuring gradient boosting: the number of estimators, the maximum depth of the model, and the minimum number of samples per leaf.
End of explanation
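The "step in the direction that reduces the loss" idea can be seen in a one-dimensional sketch (plain gradient descent, not the boosting algorithm itself):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step against the gradient to shrink the loss.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))   # 3.0
```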
gb = GradientBoostingClassifier(**gb_params)
# Fit the model to our SMOTEd train and target
gb.fit(smote_train, smote_target)
# Get our predictions
gb_predictions = gb.predict(test)
print("Predictions have finished")
accuracy_score(target_val, gb_predictions)
Explanation: With the parameters defined, we can fit, predict and score
End of explanation
# Scatter plot
trace = go.Scatter(
y = gb.feature_importances_,
x = attrition_final.columns.values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 13,
#size= rf.feature_importances_,
#color = np.random.randn(500), #set color equal to a variable
color = gb.feature_importances_,
colorscale='Portland',
showscale=True
),
text = attrition_final.columns.values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'Gradient Boosting Model Feature Importance',
hovermode= 'closest',
xaxis= dict(
ticklen= 5,
showgrid=False,
zeroline=False,
showline=False
),
yaxis=dict(
title= 'Feature Importance',
showgrid=False,
zeroline=False,
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter')
Explanation: Feature Ranking via the Gradient Boosting Model
Let us look at the most important features for the Gradient Boosting model
End of explanation |
8,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Don't forget to delete the hdmi_out and hdmi_in when finished
Generic Kernal Filter Notebook
In this notebook, we have provided a user interface that allows the user to generate various image filters by controlling the values in a 3x3 kernel matrix, the dividing factor and the bias value.
There are some examples at the following webpages
Step1: Start streaming from device connected to HDMI input on the PYNQ Board
Step2: Creates user interface
In this section, we create 11 registers that hold the values of the kernel matrix (nine entries), the dividing factor and the bias value. We also create sliders that allow the user to control the values stored in the registers.
Step3: Continue to create user interface
Step4: User interface instruction
At this point, the streaming may not work properly. Please run the code section below. Afterwards, press the 'HDMI Reset' button to reset the HDMI input and output. The streaming should now work properly.
In order to start applying the filter to the stream, press the 'Kernal Filter' button. The kernel filter defaults to a Box Blur filter.
Afterwards, users can change to any kernel filter they want by changing the values on the sliders.
Each values is denoted by the equation below | Python Code:
from pynq.drivers.video import HDMI
from pynq import Bitstream_Part
from pynq.board import Register
from pynq import Overlay
Overlay("demo.bit").download()
Explanation: Don't forget to delete the hdmi_out and hdmi_in when finished
Generic Kernal Filter Notebook
In this notebook, we have provided a user interface that allows the user to generate various image filters by controlling the values in a 3x3 kernel matrix, the dividing factor and the bias value.
There are some examples at the following webpages:
http://lodev.org/cgtutor/filtering.html#Emboss
Alternatively you can search online for kernel image processing.
Import libraries and download base bitstream
End of explanation
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(3)
hdmi_out.start()
hdmi_in.start()
Explanation: Start streaming from device connected to HDMI input on the PYNQ Board
End of explanation
R0 =Register(0)
R1 =Register(1)
R2 =Register(2)
R3 =Register(3)
R4 =Register(4)
R5 =Register(5)
R6 =Register(6)
R7 =Register(7)
R8 =Register(8)
R9 =Register(9)
R10 =Register(10)
import ipywidgets as widgets
R0_s = widgets.IntSlider(
value=1,
min=-128,
max=127,
step=1,
description='M_0:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R1_s = widgets.IntSlider(
value=1,
min=-128,
max=127,
step=1,
description='M_1:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R2_s = widgets.IntSlider(
value=1,
min=-128,
max=127,
step=1,
description='M_2:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R3_s = widgets.IntSlider(
value=1,
min=-128,
max=127,
step=1,
description='M_3:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R4_s = widgets.IntSlider(
value=1,
min=-128,
max=127,
step=1,
description='M_4:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R5_s = widgets.IntSlider(
value=1,
min=-128,
max=127,
step=1,
description='M_5:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R6_s = widgets.IntSlider(
value=1,
min=-128,
max=127,
step=1,
description='M_6:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R7_s = widgets.IntSlider(
value=1,
min=-128,
max=127,
step=1,
description='M_7:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R8_s = widgets.IntSlider(
value=1,
min=-128,
max=127,
step=1,
description='M_8:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R9_s = widgets.IntSlider(
value=9,
min=1,
max=127,
step=1,
description='Factor:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R10_s = widgets.IntSlider(
value=0,
min=0,
max=255,
step=1,
description='Bias:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
def update_r0(*args):
R0.write(R0_s.value)
R0_s.observe(update_r0, 'value')
def update_r1(*args):
R1.write(R1_s.value)
R1_s.observe(update_r1, 'value')
def update_r2(*args):
R2.write(R2_s.value)
R2_s.observe(update_r2, 'value')
def update_r3(*args):
R3.write(R3_s.value)
R3_s.observe(update_r3, 'value')
def update_r4(*args):
R4.write(R4_s.value)
R4_s.observe(update_r4, 'value')
def update_r5(*args):
R5.write(R5_s.value)
R5_s.observe(update_r5, 'value')
def update_r6(*args):
R6.write(R6_s.value)
R6_s.observe(update_r6, 'value')
def update_r7(*args):
R7.write(R7_s.value)
R7_s.observe(update_r7, 'value')
def update_r8(*args):
R8.write(R8_s.value)
R8_s.observe(update_r8, 'value')
def update_r9(*args):
R9.write(R9_s.value)
R9_s.observe(update_r9, 'value')
def update_r10(*args):
R10.write(R10_s.value)
R10_s.observe(update_r10, 'value')
Explanation: Creates user interface
In this section, we create 11 registers that hold the values of the kernel matrix (nine entries), the dividing factor and the bias value. We also create sliders that allow the user to control the values stored in the registers.
End of explanation
from IPython.display import clear_output
from ipywidgets import Button, HBox, VBox
words = ['HDMI Reset', 'Kernal Filter']
items = [Button(description=w) for w in words]
def on_hdmi_clicked(b):
hdmi_out.stop()
hdmi_in.stop()
hdmi_out.start()
hdmi_in.start()
def on_Kernal_clicked(b):
Bitstream_Part("Generic_Filter_p.bit").download()
R3_s.disabled = False;
R0.write(1)
R1.write(1)
R2.write(1)
R3.write(1)
R4.write(1)
R5.write(1)
R6.write(1)
R7.write(1)
R8.write(1)
R9.write(9)
R10.write(0)
R0_s.description='M_0'
R0_s.value = 1
R0_s.max = 127
R1_s.description='M_1'
R1_s.value = 1
R1_s.max = 127
R2_s.description='M_2'
R2_s.value = 1
R2_s.max = 127
R3_s.description='M_3'
R3_s.value = 1
R3_s.max = 127
R4_s.description='M_4'
R4_s.value = 1
R4_s.max = 127
R5_s.description='M_5'
R5_s.value = 1
R5_s.max = 127
R6_s.description='M_6'
R6_s.value = 1
R6_s.max = 127
R7_s.description='M_7'
R7_s.value = 1
R7_s.max = 127
R8_s.description='M_8'
R8_s.value = 1
R8_s.max = 127
R9_s.description='Factor'
R9_s.value = 9
R9_s.max = 127
R10_s.description='Bias'
R10_s.value = 0
R10_s.max = 255
items[0].on_click(on_hdmi_clicked)
items[1].on_click(on_Kernal_clicked)
Explanation: Continue to create user interface
End of explanation
HBox([VBox([items[0], items[1]]),R0_s,R1_s,R2_s,R3_s,R4_s,R5_s,R6_s,R7_s,R8_s,R9_s,R10_s])
hdmi_in.stop()
hdmi_out.stop()
del hdmi_in
del hdmi_out
Explanation: User interface instruction
At this point, the streaming may not work properly. Please run the code section below. Afterwards, press the 'HDMI Reset' button to reset the HDMI input and output. The streaming should now work properly.
In order to start applying the filter to the stream, press the 'Kernal Filter' button. The kernel filter defaults to a Box Blur filter.
Afterwards, users can change to any kernel filter they want by changing the values on the sliders.
Each output pixel value is given by the equation below:
[M0 M1 M2]
[M3 M4 M5]
[M6 M7 M8] / Factor + Bias
End of explanation |
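A software-only numpy sketch of the arithmetic above (no PYNQ hardware involved; a flat grey image under the default box-blur kernel should come back unchanged):

```python
import numpy as np

def apply_kernel(img, M, factor, bias):
    # pixel_out = (3x3 neighborhood * M).sum() / factor + bias,
    # clipped to the valid 8-bit range -- the same formula the UI exposes
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            out[r, c] = (img[r:r + 3, c:c + 3] * M).sum() / factor + bias
    return np.clip(out, 0, 255)

img = np.full((5, 5), 90.0)          # flat grey test image
box_blur = np.ones((3, 3))           # the notebook's default kernel
result = apply_kernel(img, box_blur, factor=9, bias=0)
print(result)                        # a flat image stays flat under box blur
```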
8,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2017 Fall Semester Engineering Mathematics Final Exam
Name
Step1: For example, using the array a, an array of the following shape can be created.
$$\left [ \begin{matrix} 30 & 32 \ 50 & 52 \end{matrix} \right ]$$
Step2: Problem 1.
(1) Using indexing and slicing on the array a, create an array of the following shape.
$$\left [ \begin{matrix} 11 & 13 & 15 \ 31 & 33 & 35 \ 51 & 53 & 55 \end{matrix} \right ]$$
```
.
```
(2) Using indexing and slicing on the array a, create an array of the following shape.
$$\left [ \begin{matrix} 2 & 12 & 22 & 32 & 42 & 52 \end{matrix} \right ]$$
```
.
```
(3) Use mask (boolean) indexing to produce the result below.
array([0, 3, 12, 15, 21, 24, 30, 33, 42, 45, 51, 54])
```
.
```
Integer indexing
Integer indexing can be used to produce the result below.
array([12, 23, 34])
Step3: (4) Use integer indexing to produce an array of the following shape.
$$\left [ \begin{matrix} 30 & 32 & 35 \ 40 & 42 & 45 \ 50 & 52 & 55 \end{matrix} \right ]$$
```
.
```
Data analysis
The data is as follows.
Wholesale prices and sale dates of weed (plant) for each of the 51 US states
Step4: ```
.
```
(2) Write code that prints the first 10 records of prices_pd.
```
.
```
(3) Explain the code below.
Step5: ```
.
```
Problem 3.
There is the following data.
Ages of kung fu class participants
Step6: (1) Explain the code below.
Step7: ```
.
```
(2) Can the result of (1) represent the ages of the kung fu class participants?
Explain why this phenomenon occurs.
```
.
```
(3) Explain which value should be used as the representative value to avoid the phenomenon above.
```
.
```
Series
There is the following series.
Step8: Problem 4.
Explain the code below and state the printed result.
Step9: ```
.
```
DataFrame
Problem 5.
A DataFrame can be created from a nested dictionary.
Step10: 추가로 아래의 코드를 실행하자. | Python Code:
a = np.arange(6) + np.arange(0, 51, 10)[:, np.newaxis]
a
Explanation: 2017 Fall Semester Engineering Mathematics Final Exam
Name :
Student ID :
Importing the modules used in this exam
from __future__ import division, print_function
import numpy as np
import pandas as pd
from datetime import datetime as dt
NumPy array indexing and slicing
The following problems use the array created by the code below.
End of explanation
a[3::2, :4:2]
Explanation: For example, using the array a, an array of the following shape can be created.
$$\left [ \begin{matrix} 30 & 32 \\ 50 & 52 \end{matrix} \right ]$$
End of explanation
a[(1, 2, 3), (2, 3, 4)]
Explanation: Problem 1.
(1) Using indexing and slicing on the array a, create an array of the following shape.
$$\left [ \begin{matrix} 11 & 13 & 15 \\ 31 & 33 & 35 \\ 51 & 53 & 55 \end{matrix} \right ]$$
```
.
```
(2) Using indexing and slicing on the array a, create an array of the following shape.
$$\left [ \begin{matrix} 2 & 12 & 22 & 32 & 42 & 52 \end{matrix} \right ]$$
```
.
```
(3) Use mask (boolean) indexing to produce the result below.
array([0, 3, 12, 15, 21, 24, 30, 33, 42, 45, 51, 54])
```
.
```
Integer indexing
Integer indexing can be used to produce the result below.
array([12, 23, 34])
End of explanation
prices_pd = pd.read_csv("data/Weed_Price.csv", parse_dates = [-1])
Explanation: (4) Use integer indexing to produce an array of the following shape.
$$\left [ \begin{matrix} 30 & 32 & 35 \\ 40 & 42 & 45 \\ 50 & 52 & 55 \end{matrix} \right ]$$
```
.
```
Data analysis
The data is as follows.
Wholesale prices and sale dates of weed (plant) for each of the 51 US states: Weed_Price.csv
The figure below shows part of the Weed_Price.csv file, which holds the per-state weed (plant) sales data for the US, opened in Excel. The actual dataset has 22899 records; the figure shows only 5 of them.
Note: line 1 holds the table's column names.
Column names: State, HighQ, HighQN, MedQ, MedQN, LowQ, LowQN, date
<p>
<table cellspacing="20">
<tr>
<td>
<img src="weed_price.png", width=600>
</td>
</tr>
</table>
</p>
Problem 2.
(1) Explain the code below.
End of explanation
def getYear(x):
return x.year
year_col = prices_pd.date.apply(getYear)
prices_pd["year"] = year_col
prices_pd.tail()
Explanation: ```
.
```
(2) Write code that prints the first 10 records of prices_pd.
```
.
```
(3) Explain the code below.
End of explanation
k_age = pd.read_csv("kungfu_age.csv")
Explanation: ```
.
```
Problem 3.
There is the following data.
Ages of kung fu class participants : kungfu_age.csv
The figure below shows part of the kungfu_age.csv file, which holds the ages of the kung fu class participants, opened in Excel.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="age.png", width=150>
</td>
</tr>
</table>
</p>
The participants' ages, viewed as a table, are as follows.
|Age | 19 | 20 | 21 | 145 | 147 |
|---|---|---|---|---|---|
| Frequency | 3 | 8 | 3 | 1 | 1 |
End of explanation
k_sum = k_age['age'].sum()
k_count = k_age['age'].count()
k_sum / k_count
Explanation: (1) Explain the code below.
End of explanation
se = pd.Series(['blue', 'purple', 'yellow'], index=[0, 2, 4])
se
Explanation: ```
.
```
(2) Can the result of (1) represent the ages of the kung fu class participants?
Explain why this phenomenon occurs.
```
.
```
(3) Explain which value should be used as the representative value to avoid the phenomenon above.
```
.
```
Series
There is the following series.
End of explanation
se.reindex(range(6), method='nearest')
Explanation: Problem 4.
Explain the code below and state the printed result.
End of explanation
pop = {'Nevada' : {2001: 2.4, 2002: 2.9},
'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}
df = pd.DataFrame(pop)
df
dfT = df.T
Explanation: ```
.
```
DataFrame
Problem 5.
A DataFrame can be created from a nested dictionary.
End of explanation
df['Nevada'].iloc[0] = 1.0
Explanation: Additionally, let us run the code below.
End of explanation |
8,470 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm working on a problem that has to do with calculating angles of refraction and what not. However, it seems that I'm unable to use the numpy.sin() function in degrees. I have tried to use numpy.degrees() and numpy.rad2deg(). | Problem:
import numpy as np
degree = 90
result = np.sin(np.deg2rad(degree)) |
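A quick check of the degree-based workflow (np.deg2rad, and np.rad2deg for the way back), including a hypothetical Snell's-law calculation of the kind the question describes:

```python
import numpy as np

angles_deg = np.array([0.0, 30.0, 90.0])
sines = np.sin(np.deg2rad(angles_deg))
print(np.round(sines, 6))          # [0.  0.5 1.]

# Refraction example (illustrative values): n1*sin(t1) = n2*sin(t2)
n1, n2, t1 = 1.0, 1.5, 45.0
t2 = np.rad2deg(np.arcsin(n1 * np.sin(np.deg2rad(t1)) / n2))
print(round(t2, 2))                # about 28.13 degrees
```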
8,471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Setting the environment
Step8: Define simple custom strategy
Step9: Configure environment
Step10: Take a look...
Step11: Time to run
Step12: <a name="full"></a>Full Throttle setup | Python Code:
import sys
sys.path.insert(0,'..')
import IPython.display as Display
import PIL.Image as Image
import numpy as np
import random
from gym import spaces
from btgym import BTgymEnv, BTgymBaseStrategy, BTgymDataset
# Handy functions:
def show_rendered_image(rgb_array):
Convert numpy array to RGB image using PILLOW and
show it inline using IPykernel.
Display.display(Image.fromarray(rgb_array))
def render_all_modes(env):
Retrieve and show environment renderings
for all supported modes.
for mode in env.metadata['render.modes']:
print('[{}] mode:'.format(mode))
show_rendered_image(env.render(mode))
def take_some_steps(env, some_steps):
Just does it. Acting randomly.
for step in range(some_steps):
rnd_action = env.action_space.sample()
o, r, d, i = env.step(rnd_action)
if d:
print('Episode finished,')
break
print(step+1, 'actions made.\n')
def under_the_hood(env):
Shows environment internals.
for attr in ['dataset','strategy','engine','renderer','network_address']:
print ('\nEnv.{}: {}'.format(attr, getattr(env, attr)))
for params_name, params_dict in env.params.items():
print('\nParameters [{}]: '.format(params_name))
for key, value in params_dict.items():
print('{} : {}'.format(key,value))
Explanation: Setting the environment: full power.
Or: making gym environment happy with your very own backtrader engine.
This example assumes close familiarity with Backtrader concepts and operation workflow.
One should at least run through Quickstart tutorial: https://www.backtrader.com/docu/quickstart/quickstart.html
Typical workflow for the traditional Backtrader backtesting procedure (recap):
Define backtrader core engine:
python
import backtrader as bt
import backtrader.feeds as btfeeds
engine = bt.Cerebro()
Add a strategy class, prepared in advance as a subclass of the backtrader base Strategy(), which should define the decision-making logic:
python
engine.addstrategy(MyStrategy)
Set broker options, such as: cash, commission, slippage, etc.:
python
engine.setcash(100000)
engine.setcommission(0.001)
Add analyzers, observers, sizers, writers to own needs:
python
engine.addobserver(bt.observers.Trades)
engine.addobserver(bt.observers.BuySell)
engine.addanalyzer(bt.analyzers.DrawDown, _name='drawdown')
engine.addsizer(bt.sizers.SizerFix, stake=1000)
Define and add data feed from one or another source (live feed is possible):
python
MyData = btfeeds.GenericCSVData(dataname=CSVfilename.csv)
engine.adddata(MyData)
Now the backtrader engine is ready to run the backtest:
python
results = engine.run()
After that you can print, analyze and think on results:
python
engine.plot()
my_disaster_drawdown = results[0].analyzers.drawdown.get_analysis()
For BTgym, the same principles apply, with several differences:
the strategy you prepare will be a subclass of the base BTgymStrategy,
which contains methods and parameters specific to the RL setup;
this strategy will not contain buy/sell decision-making logic - that part goes to the RL agent;
you define your dataset by creating a BTgymDataset class instance;
you don't add data to your bt.cerebro(): just pass the dataset to the environment, and the BTgym server will do the rest;
you don't run the backtrader engine manually via the run() method - the server will.
There are three levels to BTgym configuration:
Light:
use kwargs when making the environment. See the 'basic' example for details;
3/4:
subclass BTgymStrategy: override get_state(), get_done(), get_reward(), get_info()
and [maybe] next() methods to get own state, reward definition, order execution logic, actions etc;
pass this strategy to environment via strategy kwarg along with other parameters;
[optionally] make an instance of the BTgymDataset class as your custom dataset and pass it via the dataset kwarg.
Full throttle:
subclass strategy as in '3/4';
define bt.Cerebro(): set broker parameters, add all required observers,
analysers, stakes and other bells and whistles;
attach your '3/4'-strategy;
pass that snowball to environment via engine kwarg;
[opt.] make and pass dataset as in '3/4'.
Environment kwargs reference:
as for v0.0.4
```python
Dataset parameters:
filename=None, # Source CSV data file;
# Episode data params:
start_weekdays=[0, 1, 2, ], # Only weekdays from the list will be used for episode start.
start_00=True, # Episode start time will be set to first record of the day (usually 00:00).
episode_duration={'days': 1, 'hours': 23, 'minutes': 55}, # Maximum episode time duration in d:h:m:
time_gap={'hours': 5}, # Maximum data time gap allowed within sample in d:h.
# If set < 1 day, samples containing weekends and holidays gaps will be rejected.
Backtrader engine parameters:
start_cash=10.0, # initial trading capital.
broker_commission=0.001, # trade execution commission, default is 0.1% of operation value.
fixed_stake=10, # single trade stake is fixed type by def.
Strategy related parameters:
Observation state shape is dictionary of Gym spaces,
at least should contain raw_state field.
By convention first dimension of every Gym Box space is time embedding one;
one can define any shape; should match env.observation_space.shape.
observation space state min/max values,
For `raw_state' - absolute min/max values from BTgymDataset will be used.
state_shape=dict(
raw_state=spaces.Box(
shape=(10, 4),
low=-100,
high=100,
)
),
drawdown_call=90, # episode maximum drawdown threshold, default is 90% of initial value.
portfolio_actions=('hold', 'buy', 'sell', 'close'),
# agent actions,
# should consist with BTgymStrategy order execution logic;
# defaults are (env.side): 0 - 'do nothing', 1 - 'buy', 2 - 'sell', 3 - 'close position'.
skip_frame=1,
# Number of environment steps to skip before returning next response,
# e.g. if set to 10 -- agent will interact with environment every 10th episode step;
# Every other step agent's action is assumed to be 'hold'.
# Note: INFO part of environment response is a list of all skipped frame's info's,
# i.e. [info[-9], info[-8], ..., info[0].
Rendering controls:
render_state_as_image = True
render_state_channel=0
render_size_human = (6, 3.5)
render_size_statet = (7, 3.5)
render_size_episode = (12,8)
render_dpi=75
render_plotstyle = 'seaborn'
render_cmap = 'PRGn'
render_xlabel = 'Relative timesteps'
render_ylabel = 'Value'
render_title = 'step: {}, state observation min: {:.4f}, max: {:.4f}'
render_boxtext = dict(fontsize=12,
fontweight='bold',
color='w',
bbox={'facecolor': 'k', 'alpha': 0.3, 'pad': 3},
)
Other:
port=5500, # network port to use.
network_address='tcp://127.0.0.1:', # using localhost.
verbose=0, # verbosity mode: 0 - silent, 1 - info level, 2 - debugging level (lot of traffic!).
```
Kwargs applying logic:
if <engine> kwarg is given:
do not use default engine and strategy parameters;
ignore <strategy> kwarg and all strategy and engine-related kwargs;
else (no <engine>):
use default engine parameters;
if any engine-related kwarg is given:
override corresponding default parameter;
if <strategy> is given:
do not use default strategy parameters;
if any strategy related kwarg is given:
override corresponding strategy parameter;
else (no <strategy>):
use default strategy parameters;
if any strategy related kwarg is given:
override corresponding strategy parameter;
if <dataset> kwarg is given:
do not use default dataset parameters;
ignore dataset related kwargs;
else (no <dataset>):
use default dataset parameters;
if any dataset related kwarg is given:
override corresponding dataset parameter;
If any <other> kwarg is given:
override corr. default parameter.
<a name="3/4"></a> 3/4. 'State and Reward' with BTgymStrategy.
These are the parameters the BTgymStrategy class holds.
Note: these are strategy parameters, not environment ones (though the names are the same as above)!
```python
NEW at v0.6: Note that btgym uses the new OpenAI Gym space defined in gym.spaces.Dict, which is in fact a
[possibly nested] dictionary of base Gym spaces. You can use gym.spaces.Dict if you have
the latest Gym version from the repo, or use the equivalent btgym.spaces.DictSpace wrapper instead.
Thus, the space_shape param translates directly into a Dict space.
Observation state shape is dictionary of Gym spaces,
at least should contain raw_state field.
By convention first dimension of every Gym Box space is time embedding one;
one can define any shape; should match env.observation_space.shape.
observation space state min/max values,
For `raw_state' - absolute min/max values from BTgymDataset will be used.
state_shape=dict(
raw_state=spaces.Box(
shape=(10, 4),
low=-100,
high=100,
)
),
drawdown_call=90, # episode maximum drawdown threshold, default is 90% of initial value.
portfolio_actions=('hold', 'buy', 'sell', 'close'),
# agent actions,
# should consist with BTgymStrategy order execution logic;
# defaults are (env.side): 0 - 'do nothing', 1 - 'buy', 2 - 'sell', 3 - 'close position'.
skip_frame=1,
# Number of environment steps to skip before returning next response,
# e.g. if set to 10 -- agent will interact with environment every 10th episode step;
# Every other step agent's action is assumed to be 'hold'.
# Note: INFO part of environment response is a list of all skipped frame's info's,
# i.e. [info[-9], info[-8], ..., info[0]].
```
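The skip_frame mechanic described in the comments above is the standard RL frame-skipping pattern. A generic sketch of the idea (an illustration only, not BTgym's actual implementation; whether the agent's action lands on the first or the last frame of the skip window is an implementation detail):

```python
def step_with_skip(step_fn, action, hold, skip_frame):
    # Apply the agent's action on the first frame, then 'hold' for the rest,
    # collecting every skipped frame's info (as the INFO note above describes).
    infos = []
    for i in range(skip_frame):
        obs, reward, done, info = step_fn(action if i == 0 else hold)
        infos.append(info)
        if done:
            break
    return obs, reward, done, infos

# Toy environment: observation is a step counter, reward is 1 per step.
state = {'t': 0}
def toy_step(a):
    state['t'] += 1
    return state['t'], 1.0, state['t'] >= 100, {'action': a, 't': state['t']}

obs, reward, done, infos = step_with_skip(toy_step, 'buy', 'hold', 5)
print([i['action'] for i in infos])  # ['buy', 'hold', 'hold', 'hold', 'hold']
```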
When making your own subclass, it's your responsibility to keep these settings consistent.
End of explanation
class MyStrategy(BTgymBaseStrategy):
    """
    Example subclass of BTgym inner computation strategy;
    overrides default get_state() and get_reward() methods.
    """
    def get_price_gradients_state(self):
        """
        This method follows the naming convention: get_[state_modality_name]_state.
        Returns a normalized environment observation state
        by computing a time-embedded vector of price gradients.
        """
        # Prepare:
        sigmoid = lambda x: 1 / (1 + np.exp(-x))
        # T is a 'gamma-like' signal hyperparameter chosen so
        # our signal is in about the [-5, +5] range before passing it to sigmoid;
        # tweak it by hand to add/remove "peak suppressing":
        T = 1.2e+4
        # Use the default strategy observation variable to get the
        # time-embedded state observation as an [m, 4] numpy matrix, where
        # 4 - number of signal features == state_shape[-1],
        # m - time-embedding length == state_shape[0] == <set by user>.
        X = self.raw_state
        # ...while iterating, the inner _get_raw_state() method is called just before
        # this one, so the variable `self.raw_state` is fresh and ready to use.
        # Compute gradients with respect to the time-embedding (last) dimension:
        dX = np.gradient(X)[0]
        # Squash values into [0, 1]:
        return sigmoid(dX * T)

    def get_reward(self):
        """Computes reward as log utility of current-to-initial portfolio value ratio."""
        return float(np.log(self.stats.broker.value[0] / self.env.broker.startingcash))
Explanation: Define a simple custom strategy:
Note the use of the inner strategy variable raw_state.
End of explanation
# Define dataset:
MyDataset = BTgymDataset(
filename='../examples/data/DAT_ASCII_EURUSD_M1_2016.csv',
start_weekdays=[0, 1,],
# leave all other to defaults,
)
env = BTgymEnv(
dataset=MyDataset,
strategy=MyStrategy,
state_shape={
'raw': spaces.Box(low=-10, high=10, shape=(4,4)), # rendered under 'human' name
'price_gradients': spaces.Box(low=0, high=1, shape=(4,4))
},
drawdown_call=30,
skip_frame=5,
# use default agent actions,
# use default engine,
start_cash=100.0,
# use default commission,
# use default stake,
# use default network port,
render_modes=['episode', 'human', 'price_gradients'],
render_state_as_image = False,
render_ylabel = 'Price Gradient',
# leave other rendering params to defaults,
verbose=1,
)
Explanation: Configure environment:
All strategy parameters shown above that are not meant to be left at defaults should be passed to the environment as kwargs.
With verbose=1, pay attention to the log output to see which classes were used (base or custom).
End of explanation
under_the_hood(env)
Explanation: Take a look...
End of explanation
env.reset()
take_some_steps(env, 100)
render_all_modes(env)
Explanation: Time to run:
Play with the number of steps. Comment out env.reset() to avoid restarting the episode every time you run the cell.
Refer to the 'rendering howto' to get a sense of how renderings are updated.
End of explanation
# Clean up:
env.close()
# Now we need it:
import backtrader as bt
# Define dataset:
MyDataset = BTgymDataset(
filename='../examples/data/DAT_ASCII_EURUSD_M1_2016.csv',
start_weekdays=[0, 1,],
episode_duration={'days': 2, 'hours': 23, 'minutes': 55}, # episode duration set to about 3 days (2:23:55),
# leave all other to defaults,
)
# Configure backtesting engine:
MyCerebro = bt.Cerebro()
# Note (again): all kwargs here will go straight to the strategy parameters dict,
# so it is our responsibility to keep the observation shape/bounds consistent with what our get_state() computes.
MyCerebro.addstrategy(
MyStrategy,
state_shape={
'raw': spaces.Box(low=-10, high=10, shape=(4,4)),
'price_gradients': spaces.Box(low=0, high=1, shape=(4,4))
},
drawdown_call=99,
skip_frame=5,
)
# Then everything is very backtrader'esque:
MyCerebro.broker.setcash(100.0)
MyCerebro.broker.setcommission(commission=0.002)
MyCerebro.addsizer(bt.sizers.SizerFix, stake=20)
MyCerebro.addanalyzer(bt.analyzers.DrawDown)
# Finally:
env = BTgymEnv(
dataset=MyDataset,
episode_duration={'days': 0, 'hours': 5, 'minutes': 55}, # ignored!
engine=MyCerebro,
strategy='NotUsed', # ignored!
state_shape=(9, 99), # ignored!
start_cash=1.0, # ignored!
render_modes=['episode', 'human', 'price_gradients'],
render_state_as_image=True,
render_ylabel='Price Gradient',
render_size_human=(10,4),
render_size_state=(10,4),
render_plotstyle='ggplot',
verbose=0,
)
# Look again...
under_the_hood(env)
env.reset()
take_some_steps(env, 100)
render_all_modes(env)
# Clean up:
env.close()
Explanation: <a name="full"></a>Full Throttle setup:
Summon Backtrader power;
Which-is-what: pay attention to arguments being used or ignored.
End of explanation |
8,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise - Functional Programming
Q
Step1: Ans | Python Code:
names = ["Aalok", "Chandu", "Roshan", "Manish"]
for i in range(len(names)):
    names[i] = hash(names[i])
print(names)
Explanation: Exercise - Functional Programming
Q: Try rewriting the code below as a map. It takes a list of real names and replaces them with code names produced using a more robust strategy.
End of explanation
secret_names = list(map(hash, names))  # list() materializes the lazy map object in Python 3
print(secret_names)
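Two caveats worth adding (my notes, not part of the original exercise): in Python 3, `map` returns a lazy iterator, so wrap it in `list()` before printing; and `str` hashes are randomized per process (PYTHONHASHSEED), so `hash` produces unstable "code names" across runs. A deterministic illustration using `len` instead:

```python
names = ["Aalok", "Chandu", "Roshan", "Manish"]
via_map = list(map(len, names))               # materialize the lazy map object
via_comprehension = [len(n) for n in names]   # equivalent list-comprehension spelling
print(via_map)  # [5, 6, 6, 6]
assert via_map == via_comprehension
```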
Explanation: Ans:
End of explanation |
8,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Updating final reports for CEC'2015
First, using the webpage pdftables, the PDF tables are translated to Excel format.
Next, we put all results in a single Excel file.
Then, we are going to use the pandas library to read the data.
Step1: Then, we read the right sheet (the second one in the example)
Step3: I need a function that gets the right position in the DataFrame, given the function number and the accuracy level.
Step4: Get all the data for comparisons
Step6: Get a function that calculates the points for each function
Step8: The following function returns the scores for each position following the Formula 1 criterion.
Step10: Finally, a function get_scores combines the previous two functions.
Step11: Putting it all together
Initialize the parameters.
Step12: Now, we prepare the library.
Step13: First, comparing by groups of variables
It is very simple:
Step14: Now, we write each group's results to the Excel file.
Step15: By Groups
Step16: By Category | Python Code:
import pandas as pd
table_alg = pd.ExcelFile("results_cec2015.pdf.xlsx")
Explanation: Updating final reports for CEC'2015
First, using the webpage pdftables, the PDF tables are translated to Excel format.
Next, we put all results in a single Excel file.
Then, we are going to use the pandas library to read the data.
End of explanation
print(table_alg.sheet_names)
df = table_alg.parse(table_alg.sheet_names[1])
print(df)
Explanation: Then, we read the right sheet (the second one in the example)
End of explanation
def get_best_pos(function, accuracy_level=0):
    """
    This function gets the final position from the function and the accuracy_level
    required (0 = 1.2e5, 1 = 6e5, 2 = 3e6).

    Keyword arguments:
    function -- function.
    accuracy_level -- level of accuracy (0 to 2)
    """
    f = function - 1
    r = f % 5
    c = (f // 5) * 16 + accuracy_level * 5  # integer division so the index stays an int
    return r + 1, c

for f in range(1, 16):
    print(f, get_best_pos(f))
def parse_table_orig(df):
    accuracies = ['1.2e5', '6e5', '3e6']
    best = pd.DataFrame(columns=accuracies)
    for acc_index, acc in enumerate(accuracies):
        val = []
        for f in range(1, 16):
            r, c = get_best_pos(f, acc_index)
            val.append(df['f {}'.format(r)][c])
        best[acc] = val
    best.index = ['f{:02d}'.format(i + 1) for i in range(15)]
    return best

df = table_alg.parse(table_alg.sheet_names[1])
Explanation: I need a function that gets the right position in the DataFrame, given the function number and the accuracy level.
End of explanation
df = {}
for alg in table_alg.sheet_names:
    df[alg] = table_alg.parse(alg)
    # if "_orig" in alg:
    #     df[alg] = parse_table_orig(df[alg])
Explanation: Get all the data for comparisons
End of explanation
import numpy as np  # used below for the rank arrays


def calculate_points(fun, dfs, algs, acc=0):
    """
    Returns the ranking in positions for the function and algorithm desired.

    Keyword parameters:
    - fun -- function to compare.
    - dfs -- hash with the dataframes for each algorithm.
    - algs -- algorithms to compare (must be in dfs).
    - acc -- accuracy level (from 0 to 2).
    """
    values = pd.DataFrame(columns=algs)
    for alg in algs:
        df = dfs[alg]
        values[alg] = [df[acc][fun - 1]]
    ranks = values.rank(1, method='min')
    return np.array(ranks, dtype=int).reshape(len(algs))
Explanation: Get a function that calculates the points for each function
End of explanation
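The key step in calculate_points above is `DataFrame.rank(1, method='min')`, which ranks across each row (lower error means a better rank) and gives tied values the lowest shared rank. A standalone illustration:

```python
import pandas as pd

row = pd.DataFrame({'algA': [0.5], 'algB': [0.1], 'algC': [0.1]})
ranks = row.rank(axis=1, method='min')
# The two tied best algorithms both get rank 1; algA comes third.
print(ranks.iloc[0].to_dict())  # {'algA': 3.0, 'algB': 1.0, 'algC': 1.0}
```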
def get_f1_score(num_algs):
    """
    Return an np.array with the scoring criterion by position from Formula 1,
    in which the first 10 positions have scores.

    The array has num_algs positions:
    - If num_algs is lower than 10, it is shortened.
    - If num_algs is greater than 10, it is padded with 0s.
    """
    f1 = np.array([25, 18, 15, 12, 10, 8, 6, 4, 2, 1])
    if num_algs < len(f1):
        f1 = f1[:num_algs]
    else:
        # Pad with zeros so positions beyond the tenth score nothing
        # (the original in-place += would not extend the array).
        f1 = np.concatenate([f1, np.zeros(num_algs - len(f1), dtype=int)])
    return f1
Explanation: The following function returns the scores for each position following the Formula 1 criterion.
End of explanation
def get_scores(df, algs, funs, accuracies):
    """
    This function returns the scores for the algorithms 'algs', functions 'funs'
    and accuracies 'accuracies'.

    Keyword parameters:
    df -- dataframe with the data.
    algs -- algorithms to compare (must be included in df).
    funs -- functions to compare.
    accuracies -- accuracy levels to compare (as strings).
    """
    size = len(algs)
    f1 = get_f1_score(size)
    result = np.zeros(size)
    for acc in accuracies:
        for i in funs:
            result += f1[calculate_points(i, df, algs, acc) - 1]
    results_alg = {alg: res for alg, res in zip(algs, result)}
    return results_alg
Explanation: Finally, a function get_scores combines the previous two functions.
End of explanation
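The accumulation line `result += f1[calculate_points(...) - 1]` relies on NumPy fancy indexing: indexing the points array with an array of rank-1 positions maps every algorithm's rank to its point award in one vectorized step. For example:

```python
import numpy as np

points = np.array([25, 18, 15, 12, 10])  # award for ranks 1..5
ranks = np.array([2, 1, 5, 3, 1])        # per-algorithm ranks (ties share rank 1)
print(points[ranks - 1])                 # [18 25 10 15 25]
```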
algs = filter(lambda x: "_orig" not in x, df.keys())
algs = filter(lambda x: "_old" not in x, algs)
algs = [alg for alg in algs]
print(algs)
#algs = ['IHDELS', 'MOS']
accuracies = ['1.2e5', '6e5', '3e6']
funs_group = [range(1, 4), range(4, 8), range(8, 12), range(12, 15), [15]]
funs_group_names = ['Fully Separable', 'Partially Separable I', 'Partially Separable II', 'Overlapping', 'Non-separable']
Explanation: Putting it all together
Initialize the parameters.
End of explanation
from matplotlib import pyplot as plt
import seaborn as sns
# Increase font
sns.set(font_scale=1.5)
# Put white grid
sns.set_style("whitegrid")
Explanation: Now, we prepare the library.
End of explanation
for fid, funs in enumerate(funs_group):
    title = funs_group_names[fid]
    results = get_scores(df, algs, funs, accuracies)
    results_df = pd.Series(results)
    plt.figure()
    results_df.plot(kind='bar', title=title)
    locs, labels = plt.xticks()
    plt.setp(labels, rotation=90)
Explanation: First, comparing by groups of variables
It is very simple:
funs_categories = dict(zip(funs_group_names, funs_group))
excel = pd.ExcelWriter("results.xls")

def print_results(df, algs, title, funs, accuracies, style='b'):
    results = get_scores(df, algs, funs, accuracies)
    results_df = pd.Series(results)
    plt.figure()
    pd.DataFrame(results_df, columns=['Results']).to_excel(excel, title)
    results_df.plot(kind='bar', title=title, color=style)
    locs, labels = plt.xticks()
    plt.setp(labels, rotation=90)
Explanation: Now, we write each group's results to the Excel file.
End of explanation
fig_names = ['cat1', 'cat2', 'cat3', 'cat4', 'cat5']
styles = ['blue', 'orange', 'yellow', 'green', 'brown']
for id, title in enumerate(funs_group_names):
    fig_name = fig_names[id]
    print_results(df, algs, title, funs_categories[title], accuracies, styles[id])
    plt.savefig(fig_name, bbox_inches='tight')
Explanation: By Groups
End of explanation
def results_by_accuracy(acc):
    styles = ['blue', 'orange', 'yellow', 'green', 'brown']
    funs = range(15)
    results_cat = pd.DataFrame(columns=funs_group_names)
    for id, fun in enumerate(funs_group_names):
        fig_name = 'fe{}'.format(acc)
        results = get_scores(df, algs, funs_categories[fun], [acc])
        results_cat[fun] = pd.Series(results)
    title = 'Results after {} Fitness Evaluations'.format(acc)
    results_cat.plot(kind='bar', title=title, stacked=True)
    results_cat.to_excel(excel, acc)

for acc in accuracies:
    plt.figure()
    results_by_accuracy(acc)
    fname = '{}.png'.format(acc.replace('.', ''))
    lgd = plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
    locs, labels = plt.xticks()
    plt.setp(labels, rotation=90)
    plt.savefig(fname, bbox_extra_artists=(lgd,), bbox_inches='tight')
def results_by_all():
    styles = ['blue', 'orange', 'yellow', 'green', 'brown']
    funs = range(15)
    results_cat = pd.DataFrame(columns=funs_group_names)
    for id, fun in enumerate(funs_group_names):
        fig_name = 'fall'
        results = get_scores(df, algs, funs_categories[fun], accuracies)
        results_cat[fun] = pd.Series(results)
    title = 'Overall score'
    results_cat.plot(kind='bar', title=title, stacked=True)
    results_cat.to_excel(excel, 'all')

plt.figure()
results_by_all()
fname = 'all.png'
lgd = plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
locs, labels = plt.xticks()
plt.setp(labels, rotation=90)
plt.savefig(fname, bbox_extra_artists=(lgd,), bbox_inches='tight')
excel.save()
Explanation: By Category
End of explanation |
8,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Civis Python API Client
Stephen Hoover, Lead Data Scientist<br>
August 2017
Civis Platform provides you with a Data Science API which gives you direct access to Civis Platform's cloud-based infrastructure, data science tools, and data. You can query large datasets, train a dozen models at once (and set them to re-train on a schedule), and create or update dashboards to show off your work. Using the Data Science API, you can write code in scripts or notebooks as if you're working on your laptop, but with all the resources of the Civis Platform.
Civis Analytics provides API clients for both Python and R. This notebook introduces you to the abstractions used in the Civis Python API Client and provides a few use examples. If you aren't running this notebook in the Civis Platform, follow the instructions in Section A.3 for setup instructions. If you aren't a Civis Platform subscriber, sign up for a free trial today!
Step1: Table of Contents
What's Available?
Data Access<br>
2.1 Reading a table from Civis<br>
2.2 Writing tables to Civis<br>
2.3 What is the CivisFuture?<br>
2.4 Executing a SQL query<br>
2.5 Writing and reading files<br>
2.6 Other useful I/O functions<br>
Machine Learning<br>
3.1 Training your model<br>
3.2 Making predictions<br>
Direct API Access<br>
4.1 Tables<br>
4.2 Paginated responses<br>
4.3 The API Response<br>
Build something new<br>
5.1 Creating and running Container Scripts<br>
5.2 Custom Scripts<br>
A Data Science API
Appendix
A.1 What is an API client?<br>
A.2 Rate limits and retries<br>
A.3 Using the Python API client outside of Civis<br>
A.4 Where can I go from here?<br>
<a id='whats-available'></a>
1. What's Available?
The Python API client has two kinds of functionality.
First, you can interact directly with the Civis Data Science API by using a civis.APIClient object. This translates the native REST API into Python code, so that you can pass parameters to functions rather than writing out http requests by hand. These functions all immediately return the response from Civis Platform.
The second kind of functionality is higher-level functions which make common tasks easier, such as copying a table from Redshift into a pandas.DataFrame, or training a machine learning model. You can access these functions through the civis namespace.
- civis.io
Step2: <a id='data-access'></a>
2. Data Access
You can use the functions provided in the civis.io namespace to move data in and out of Civis Platform. Here's a few examples of how that works. This notebook assumes that all of the data we'll use are in the same database, defined below. If your data aren't in the "Civis Database" database, change the following cell to use the correct name.
Step3: <a id='reading-table'></a>
2.1 Reading a table from Civis
Sometimes you need to move a table from your Civis Redshift cluster into RAM so that you can manipulate it. The civis.io.read_civis function will do that for you.
This is the first example of a wrapper function, which is a special piece of code designed to do a common task (in this case, read a table from your Civis Redshift cluster and return it as a list or a pandas.DataFrame). There are a number of wrapper functions in civis.io designed to assist with getting data in and out of Civis Platform. They will make your life easier than e.g. working with the raw API endpoints or clicking through the GUI. The recommended best practice is to use wrapper functions whenever possible, rather than the client directly.
Step4: Let's read out a table of data on public transit ridership in Chicago. The docstring tells us that unless use_pandas is True (default=False), the function will return a list. We want a DataFrame here, so set use_pandas to True.
Step5: Now that we have the table in our notebook, we can inspect it and use Python functions to modify it. Let's turn it into a table of ridership by month for each station, starting in 2010.
Step6: <a id='writing-tables'></a>
2.2 Writing tables to Civis
Now that we have our modified data, let's put it back into your Redshift cluster. Use the function civis.io.dataframe_to_civis to do the upload. We'll put it into a table in the "scratch" schema. That's a customary location for tables we don't intend to keep around for long.
Step7: <a id='civisfuture'></a>
2.3 What is the CivisFuture?
Notice that, although the civis.io.read_civis function waited until your download was done to finish executing, the civis.io.dataframe_to_civis function returned immediately, even though Civis Platform hadn't finished creating your table. When working with the client, you will often need to start jobs that will take some time to complete. To deal with this, the Civis client includes CivisFuture objects, which allow you to process multiple long running jobs simultaneously.
The CivisFuture object is a subclass of the standard library concurrent.futures.Future object and tracks a Civis Platform run. This abstraction allows you to start multiple jobs at once, rather than wait for one to finish before starting the other. You can keep working while your table creation happens, and only stop to wait (by calling CivisFuture.result() or concurrent.futures.wait) once you reach a step which relies on your run having finished.
Find more information on CivisFuture in the User Guide
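Since CivisFuture subclasses concurrent.futures.Future, the launch-several-jobs-then-wait pattern is exactly the standard-library one. Here it is illustrated with plain stdlib futures standing in for CivisFuture objects (no Civis API is called; the job names and timings below are made up):

```python
from concurrent.futures import ThreadPoolExecutor, wait
import time

def slow_job(name, seconds):
    # Stand-in for a long-running Civis Platform run.
    time.sleep(seconds)
    return f"{name} done"

with ThreadPoolExecutor() as pool:
    # Kick off both "jobs" immediately, just as two dataframe_to_civis calls would.
    futures = [pool.submit(slow_job, "job_a", 0.1),
               pool.submit(slow_job, "job_b", 0.2)]
    wait(futures)  # block only once you actually need the results
    print([f.result() for f in futures])  # ['job_a done', 'job_b done']
```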
Step8: Now let's clean up that scratch table. We don't need to wait for Civis Platform to finish, so this time we won't block on the output of civis.io.query_civis. Civis Platform will keep running the table action as we move to the next cells of this notebook.
Step9: <a id='file-io'></a>
2.5 Writing and reading files
You can store arbitrary files in Civis Platform by using civis.io.file_to_civis to store and civis.io.civis_to_file to retrieve data. Let's grab the current status of Chicago's bike share network and store the data in a Civis File.
Step10: Upload your data by sending it as an open file object to civis.io.file_to_civis.
Step11: Then you can use that file ID to download the file into a new buffer.
Step12: Because retrieving JSON from a Civis File is such a common occurrence, there's a simpler function for files you know are formatted as JSON.
Step13: <a id='other-io'></a>
2.6 Other useful I/O functions
The following functions handle moving structured data to and from Civis
Step15: Create the training set by joining the Brandable upgrade labels to the customer data.
Step16: CivisML automatically evaluates the predictive performance of each model using several standard metrics. Now that we've started some models training, we'll check the area under the ROC curve of each model as it finishes training. Once all of the models finish training, we'll pull out the best of them.
Step17: <a id='ml-predict'></a>
3.2 Making predictions
Once you've trained a model, you can use it to make predictions. CivisML will automatically parallelize predictions when you have a large dataset, so no matter how big the dataset, you won't need to wait too long. Let's use the best model we found from the previous step to make predictions about which users are most likely to upgrade in the future.
Step18: If you wanted to store the predictions in a Redshift table, you could have provided an output_table parameter. Since this is a relatively small dataset, it's faster to skip the table write and pull down the predictions directly. Let's find the 5% of users who are most likely to upgrade.
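The notebook text doesn't show the selection code, but one hedged way the "top 5%" cut could look once predictions are in a DataFrame (the column names here are hypothetical, chosen for illustration):

```python
import pandas as pd

preds = pd.DataFrame({
    'user_id': range(1, 101),
    'upgrade_prob': [i / 100 for i in range(1, 101)],  # stand-in scores
})
# Keep only the users above the 95th percentile of predicted probability.
cutoff = preds['upgrade_prob'].quantile(0.95)
top_5pct = preds[preds['upgrade_prob'] > cutoff]
print(len(top_5pct))  # 5
```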
Step19: <a id='raw-api'></a>
4. Direct API Access
You can inspect the client object and read documentation about individual functions just as you would with any other Python code. For example, you can tab-complete after typing "client." to get a list of API "endpoints", and further tab-complete from "client.users." to find a list of API calls related to users. Here's the way you can ask Civis Platform who it thinks you are
Step20: <a id='tables'></a>
4.1 Tables
Next, let's list the tables available in a single schema. The Civis Data Science API often uses unique IDs instead of names, and the APIClient gives you convenience functions to look up those IDs if you know the name. In this case, we need to know the database ID of our database, rather than the name.
Step21: Now let's use the API to look up some information about the CTA daily ridership table. my_tables is a list of API responses. Because searching through lists like this is common, the Civis Python API client provides helper functions (civis.find and civis.find_one) which will locate the entry or entries you're interested in. Let's find the ID of the "cta_ridership_daily" table and use that to look up the names and types of each of the columns.
Step22: <a id='pagination'></a>
4.2 Paginated responses
Some endpoints may contain a lot of data which Civis Platform will only serve over multiple requests. For example, client.tables.list() will only return information on a maximum of up to 1000 tables in a single call (the default is 50). Therefore, if we need to collect data on 4000 different tables, we'll need to make at least 4 seperate requests to get all of the data. (Use the page_num argument to select additional "pages" of data.) To make this easier, the client includes a special iterator parameter on endpoints which may require making multiple requests to get all of the data. These requests could require making a large number of API calls, so use iterator=True sparingly!
Let's pretend that the "public" schema has more tables than we want to list at once and iterate through it to find all of the tables with 5 columns.
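The iterator=True behavior — fetch a page, yield its items, request the next page until exhausted — is the classic paginated-API pattern. A generic sketch of the idea (not the client's internal code; the fake endpoint below is invented for illustration):

```python
def iterate_pages(list_fn, limit=50):
    # Request successive pages until a short (or empty) page signals the end.
    page_num = 1
    while True:
        page = list_fn(page_num=page_num, limit=limit)
        yield from page
        if len(page) < limit:
            break
        page_num += 1

# Fake endpoint serving 120 items in fixed-size pages:
data = list(range(120))
def fake_list(page_num, limit):
    start = (page_num - 1) * limit
    return data[start:start + limit]

assert list(iterate_pages(fake_list, limit=50)) == data
```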
Step23: <a id='api-response'></a>
4.3 The API Response
Every time you communicate with the Civis Data Science API, you get a response. In fact, it's a civis.response.Response object. Its contents be accessed either like a dictionary or as normal attributes. The Response always comes back immediately, even if it's to acknowledge that you've started something that will take a long time to finish. It will contain either the information you've asked for or an acknowledgement of the action you took. Here's an example of the Response when we ask for the status of the best model we built in section 3.
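The "dictionary or attribute" access the Response supports is a small pattern you can reproduce with a dict subclass (an illustration of the idea only, not civis.response.Response itself):

```python
class AttrDict(dict):
    # Route attribute lookups through the dict's keys.
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

resp = AttrDict(state='succeeded', id=42)
print(resp['state'], resp.state)  # succeeded succeeded
```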
Step24: <a id='something-new'></a>
5. Build something new
The most flexible way to interact with Civis Platform is by writing your own code and using Civis Platform to run it. For example, you could imagine wanting to write a program that counts from 1 to 100 and replaces every number that's evenly divisible by 3 with "fizz", any number divisible by 5 with "buzz", and numbers divisible by both 3 and 5 with "fizzbuzz". There's no Data Science API function that implements FizzBuzz, so you would need to write that yourself, but you can use Civis Platform to schedule it, share it, and run it in the cloud while you free up your laptop for other purposes. Container Scripts are our general-purpose solution for taking any code and running it in Civis Platform.
Container Scripts become really powerful when you pair the flexibility of bring-your-own-code with the power of the Data Science API. One of our favorite design patterns is writing code that calls the Data Science API as part of a more customized workflow. For example, we might use the Data Science API to pull a table into a pandas dataframe, write special-purpose pandas code for manipulating the dataframe, use the Data Science API again to build a model, write more code to analyze the results of the model, and finally publish those analysis results as a report in Civis Platform. The most sophisticated data science code we write is delivered and shared via Container Scripts because of how easy it is to write software in Python or R (or, really, any language) calling API functions for accessing Civis Platform.
<a id='container-scripts'></a>
5.1 Creating and running Container Scripts
Let's take our earlier example of checking the status of the Chicago bike share system and package it into a script which we can schedule to run regularly. Here we're writing our task as a function and using cloudpickle, an open-source Python library which can pickle dynamically-defined functions, to send it to Civis Platform. You could also write this code as a text file and run it as a script.
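cloudpickle's key trick is serializing the function body by value rather than by module reference, so the receiving side doesn't need your code installed. A rough stdlib analogue of that idea (cloudpickle itself additionally handles closures, globals, and much more):

```python
import marshal
import types

def status_check():
    return "ok"

payload = marshal.dumps(status_check.__code__)             # ship the code object itself
restored = types.FunctionType(marshal.loads(payload), {})  # rebuild on the "remote" side
print(restored())  # ok
```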
Step26: Now that we've uploaded the function, we tell Civis Platform to run it. A Container Script consists of an environment (the "Container", which is a Docker container) and a bash command to run inside that container. Civis provides some general-purpose Docker images, or you can use any public Docker image. Here we're using "datascience-python". Note that we're using a specific image tag, rather than the default "latest". It's a good practice to set an image tag. The "latest" tag will change with new releases, and that could unexpectedly cause a job which used to work to start failing.
In this example, I'm storing my code in a Civis Platform file, but Container Scripts can also access code which you've stored in GitHub. A file is great for small, quick examples like this, but GitHub is a better way to handle larger or production code. Version control is your friend!
Like many operations with the Civis Data Science API, running a Container Script is two steps -- first, you create the job (with client.scripts.post_containers). Second, you tell Civis Platform to start running the job. You can use the client.scripts.post_containers_runs to start a run (this will return a Response), or you can use the convenience function civis.utils.run_job to start a run. If you use civis.utils.run_job, you'll get back a CivisFuture, which is a convenient way to track when your run has finished.
Step27: We've stored the bike station data as a JSON in Civis Platform, and set a "run output" on the script which read the data. Run outputs are a way for you to transfer data from one job to another. You can inspect this job to find its run outputs, and use the file ID you find there to retrieve the data about the Chicago bike sharing network.
Step28: <a id='custom-scripts'></a>
5.2 Custom Scripts
Remember that prediction we made about which customers are likely to upgrade? We didn't store it in a table at the time. What if we change our minds? We could download it in this notebook and then use civis.io.dataframe_to_civis to make a new table. (Most of the time this will be the right thing to do.) However, we could also use the "Import from URL" Template to create a Custom Script which will do that for us.
If you (or one of your colleagues) has created an especially useful Container Script which you'll want to run over and over, you can turn it into a Template. Once you have access to a templated script (Civis provides a few that we've found useful), you can run it for yourself by creating a "Custom Script". The Custom Script lets you modify a few parameters and then run the code that your colleague wrote.
If you know the template ID of a template script, you can use client.scripts.post_custom to create a new job. As with the Container Script, we'll use civis.utils.run_job to start the run so that we get back a CivisFuture.
Step29: Finally, let's keep the database tidy and delete this table.
Step30: <a id='ds-api'></a>
6. A Data Science API
This has been a whirlwind tour of the Civis Data Science API. Civis Platform has a lot more features than what we've covered here, such as sharing, enhancements, reports, and more. This tour gives you what you need to get started. Use the API client documentation or the API documentation to get a complete picture of everything the API can do, and contact support@civisanalytics.com if you run into trouble. The Civis Data Science API is a powerful toolbox that you can use to build, scale, and deploy your data science workflows!
Appendix
These sections will give you extra context on what's going on behind the scenes with the Civis API client.
<a id='what-is-an-api-client'></a>
A.1 What is an API client?
API | Python Code:
print(f"Using Civis Python API Client version {civis.__version__}.")
Explanation: The Civis Python API Client
Stephen Hoover, Lead Data Scientist<br>
August 2017
Civis Platform provides you with a Data Science API which gives you direct access to Civis Platform's cloud-based infrastructure, data science tools, and data. You can query large datasets, train a dozen models at once (and set them to re-train on a schedule), and create or update dashboards to show off your work. Using the Data Science API, you can write code in scripts or notebooks as if you're working on your laptop, but with all the resources of the Civis Platform.
Civis Analytics provides API clients for both Python and R. This notebook introduces you to the abstractions used in the Civis Python API Client and provides a few use examples. If you aren't running this notebook in the Civis Platform, follow the instructions in Section A.3 for setup instructions. If you aren't a Civis Platform subscriber, sign up for a free trial today!
End of explanation
# Uncomment the following two lines if you run this notebook outside of Civis Platform
#import civis
#client = civis.APIClient()
Explanation: Table of Contents
What's Available?
Data Access<br>
2.1 Reading a table from Civis<br>
2.2 Writing tables to Civis<br>
2.3 What is the CivisFuture?<br>
2.4 Executing a SQL query<br>
2.5 Writing and reading files<br>
2.6 Other useful I/O functions<br>
Machine Learning<br>
3.1 Training your model<br>
3.2 Making predictions<br>
Direct API Access<br>
4.1 Tables<br>
4.2 Paginated responses<br>
4.3 The API Response<br>
Build something new<br>
5.1 Creating and running Container Scripts<br>
5.2 Custom Scripts<br>
A Data Science API
Appendix
A.1 What is an API client?<br>
A.2 Rate limits and retries<br>
A.3 Using the Python API client outside of Civis<br>
A.4 Where can I go from here?<br>
<a id='whats-available'></a>
1. What's Available?
The Python API client has two kinds of functionality.
First, you can interact directly with the Civis Data Science API by using a civis.APIClient object. This translates the native REST API into Python code, so that you can pass parameters to functions rather than writing out http requests by hand. These functions all immediately return the response from Civis Platform.
The second kind of functionality is higher-level functions which make common tasks easier, such as copying a table from Redshift into a pandas.DataFrame, or training a machine learning model. You can access these functions through the civis namespace.
- civis.io : Data input, output, and transfer, as well as SQL queries on Redshift tables
- civis.ml : Machine learning
- civis.parallel : Tools for doing batch computing in Civis Platform
When you start a new Civis Jupyter notebook, you already have the civis namespace imported and a civis.APIClient object named client created and ready to go!
End of explanation
DATABASE = "Civis Database"
Explanation: <a id='data-access'></a>
2. Data Access
You can use the functions provided in the civis.io namespace to move data in and out of Civis Platform. Here are a few examples of how that works. This notebook assumes that all of the data we'll use are in the same database, defined below. If your data aren't in the "Civis Database" database, change the following cell to use the correct name.
End of explanation
# First, use "?" to investigate the parameters of civis.io.read_civis
civis.io.read_civis?
Explanation: <a id='reading-table'></a>
2.1 Reading a table from Civis
Sometimes you need to move a table from your Civis Redshift cluster into RAM so that you can manipulate it. The civis.io.read_civis function will do that for you.
This is the first example of a wrapper function, which is a special piece of code designed to do a common task (in this case, read a table from your Civis Redshift cluster and return it as a list or a pandas.DataFrame). There are a number of wrapper functions in civis.io designed to assist with getting data in and out of Civis Platform. They will make your life easier than e.g. working with the raw API endpoints or clicking through the GUI. The recommended best practice is to use wrapper functions whenever possible, rather than the client directly.
End of explanation
df = civis.io.read_civis(table='public.cta_ridership_daily',
database=DATABASE,
use_pandas=True)
print(f"The table's shape is {df.shape}.")
df.head()
Explanation: Let's read out a table of data on public transit ridership in Chicago. The docstring tells us that unless use_pandas is True (default=False), the function will return a list. We want a DataFrame here, so set use_pandas to True.
End of explanation
import pandas as pd
df['month'] = pd.DatetimeIndex(df['date']).to_period('M')
rides_post_2009 = df[df['month'] >= pd.Period('2010-01', 'M')]
rides_by_month = (rides_post_2009.groupby(['stationname', 'month'])[['rides']]
.sum()
.reset_index())
print(f"The grouped table's shape is {rides_by_month.shape}.")
rides_by_month.head()
Explanation: Now that we have the table in our notebook, we can inspect it and use Python functions to modify it. Let's turn it into a table of ridership by month for each station, starting in 2010.
End of explanation
rbm_tablename = 'scratch.rides_by_month'
fut = civis.io.dataframe_to_civis(
df=rides_by_month,
database=DATABASE,
table=rbm_tablename,
distkey='month',
sortkey1='month',
existing_table_rows='drop',
) # This is non-blocking
print(fut)
fut.result() # This blocks (warning: can take a few minutes to run)
Explanation: <a id='writing-tables'></a>
2.2 Writing tables to Civis
Now that we have our modified data, let's put it back into your Redshift cluster. Use the function civis.io.dataframe_to_civis to do the upload. We'll put it into a table in the "scratch" schema. That's a customary location for tables we don't intend to keep around for long.
End of explanation
station_name = "Washington/Wells"
month = "2015-03"
result = civis.io.query_civis(database=DATABASE,
sql=(f"SELECT rides FROM {rbm_tablename} "
f"WHERE stationname = '{station_name}' "
f"and month = '{month}'"),
).result()
print(f"The {station_name} station had {result['result_rows'][0][0]} riders in {month}.")
Explanation: <a id='civisfuture'></a>
2.3 What is the CivisFuture?
Notice that, although the civis.io.read_civis function waited until your download was done to finish executing, the civis.io.dataframe_to_civis function returned immediately, even though Civis Platform hadn't finished creating your table. When working with the client, you will often need to start jobs that will take some time to complete. To deal with this, the Civis client includes CivisFuture objects, which allow you to process multiple long running jobs simultaneously.
The CivisFuture object is a subclass of the standard library concurrent.futures.Future object and tracks a Civis Platform run. This abstraction allows you to start multiple jobs at once, rather than wait for one to finish before starting the other. You can keep working while your table creation happens, and only stop to wait (by calling CivisFuture.result() or concurrent.futures.wait) once you reach a step which relies on your run having finished.
Find more information on CivisFuture in the User Guide: http://civis-python.readthedocs.io/en/latest/user_guide.html#civis-futures
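Because CivisFuture subclasses concurrent.futures.Future, the start-several-jobs-then-wait pattern works exactly like it does with the standard library. Here's a small standalone sketch of that pattern using only concurrent.futures — the sleep stands in for a real Platform run:

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait

def long_running_job(job_id, seconds):
    """Stand-in for a Platform run: sleep instead of doing real work."""
    time.sleep(seconds)
    return f"job-{job_id} done"

# Start three "runs" at once instead of waiting on each one in turn.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(long_running_job, i, 0.05) for i in range(3)]
    done, not_done = wait(futures)  # block only once we actually need results

results = sorted(f.result() for f in done)
print(results)  # ['job-0 done', 'job-1 done', 'job-2 done']
```

With CivisFuture the shape is the same: submit your Platform jobs, keep working, and call `.result()` (or `wait`) only when you need the outcome.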
<a id='executing-sql'></a>
2.4 Executing a SQL query
You can also use functions in the civis.io namespace to run SQL in Civis Platform as if you were working with Query. You can use this same method in your scripts to create or drop tables, assign permissions, or do anything else you would want to do in Query.
Let's use a Query to pull out the March 2015 traffic at one of the stations in downtown Chicago. Here we're immediately asking for the result of the query by calling .result() on the returned CivisFuture.
End of explanation
fut_drop = civis.io.query_civis(database=DATABASE,
sql=f"DROP TABLE IF EXISTS {rbm_tablename}")
Explanation: Now let's clean up that scratch table. We don't need to wait for Civis Platform to finish, so this time we won't block on the output of civis.io.query_civis. Civis Platform will keep running the table action as we move to the next cells of this notebook.
End of explanation
import requests
divvy_api = 'https://feeds.divvybikes.com/stations/stations.json'
bikes = requests.get(divvy_api).json()
print(f"Downloaded data on {len(bikes['stationBeanList'])} stations.")
Explanation: <a id='file-io'></a>
2.5 Writing and reading files
You can store arbitrary files in Civis Platform by using civis.io.file_to_civis to store and civis.io.civis_to_file to retrieve data. Let's grab the current status of Chicago's bike share network and store the data in a Civis File.
End of explanation
import io
import json
buf = io.TextIOWrapper(io.BytesIO()) # `json` writes text
json.dump(bikes, buf)
buf.seek(0)
bike_file_id = civis.io.file_to_civis(buf.buffer, 'Divvy status')
print(f"File uploaded to file number {bike_file_id}.")
Explanation: Upload your data by sending it as an open file object to civis.io.file_to_civis.
End of explanation
buf_down = io.TextIOWrapper(io.BytesIO())
civis.io.civis_to_file(bike_file_id, buf_down.buffer)
buf_down.seek(0)
bikes_down = json.load(buf_down)
bikes == bikes_down
Explanation: Then you can use that file ID to download the file into a new buffer.
End of explanation
bikes_again = civis.io.file_to_json(bike_file_id)
print("The file I stored in Civis has data on "
f"{len(bikes_again['stationBeanList'])} stations.")
Explanation: Because retrieving JSON from a Civis File is such a common occurrence, there's a simpler function for files you know are formatted in JSON: civis.io.file_to_json. Similarly, if you know that a file is a CSV, you could use civis.io.file_to_dataframe to access it as a pandas.DataFrame.
End of explanation
# Define the algorithms and model parameters to use
MODELS = ['sparse_logistic', 'random_forest_classifier', 'extra_trees_classifier']
DV = 'upgrade' # Column name in the training table
PKEY = 'brandable_user_id' # Column name in the training table
EXCLUDE = ['residential_zip'] # Don't train on these columns, if present
training_table = 'brandable_upgrades.brandable_training_data'
Explanation: <a id='other-io'></a>
2.6 Other useful I/O functions
The following functions handle moving structured data to and from Civis:
* civis_to_csv(filename, sql, database[, ...]) Export data from Civis to a local CSV file.
* csv_to_civis(filename, database, table[, ...]) Upload the contents of a local CSV file to Civis.
* dataframe_to_civis(df, database, table[, ...]) Upload a pandas.DataFrame into a Civis table.
* read_civis(table, database[, columns, ...]) Read data from a Civis table.
* read_civis_sql(sql, database[, use_pandas, ...]) Read data from Civis using a custom SQL string.
<a id='ml'></a>
3. Machine Learning
In this section, we will walk through how to build a model using CivisML, a Civis Platform feature with a high-level interface in the Civis API client.
You can use CivisML to leverage Civis Platform's infrastructure to do predictive modeling. CivisML is built on scikit-learn, so you have lots of flexibility to define your own modeling algorithms. Check out the official documentation for more information, or read the example on our blog.
<a id='ml-train'></a>
3.1 Training your model
To use CivisML, start by constructing a civis.ml.ModelPipeline object. The ModelPipeline defines the algorithm you want to use, as well as the name of the dependent variable. You can then call the train and predict methods to learn from your data or to make new predictions.
Let's use the API client to help us predict which customers are most likely to upgrade to a premium service, using the demo "Brandable" dataset. We can quickly start three different models training by looping over the parameters we want for each.
For this example, we're using Civis's pre-defined algorithms, but if those don't fit your problem, you can create your own algorithms to use.
End of explanation
sql = f"""DROP TABLE IF EXISTS {training_table};
CREATE TABLE {training_table} AS
(SELECT u.*, p.upgrade FROM brandable_customers.brandable_all_users u
JOIN brandable_customers.brandable_pilot p
ON p.brandable_user_id = u.brandable_user_id)"""
civis.io.query_civis(database=DATABASE, sql=sql).result().state
from civis.ml import ModelPipeline
models = {}
for m in MODELS:
name = f'"{m}" model for {DV}'
model = ModelPipeline(model=m,
dependent_variable=DV,
primary_key=PKEY,
excluded_columns=EXCLUDE,
model_name=name)
train = model.train(table_name=training_table, database_name=DATABASE)
models[train] = model
print(f'Started training the "{name}" model.')
Explanation: Create the training set by joining the Brandable upgrade labels to the customer data.
End of explanation
from concurrent.futures import as_completed
aucs = {}
for train in as_completed(models):
if train.succeeded():
print(f"Model# {train.train_job_id} on DV "
f"\"{train.metadata['data']['target_columns'][0]}\" "
f'("{models[train].model_name}") '
f"has a ROC AUC of {round(train.metrics['roc_auc'], 3)}.")
aucs[train.metrics['roc_auc']] = train
best_model = models[aucs[max(aucs)]]
print(f"The \"{best_model.model_name}\" model has the best ROC AUC.")
Explanation: CivisML automatically evaluates the predictive performance of each model using several standard metrics. Now that we've started some models training, we'll check the area under the ROC curve of each model as it finishes training. Once all of the models finish training, we'll pull out the best of them.
End of explanation
score_table = 'scratch.my_scores_table'
predict = best_model.predict(table_name='brandable_customers.brandable_all_users',
database_name=DATABASE)
Explanation: <a id='ml-predict'></a>
3.2 Making predictions
Once you've trained a model, you can use it to make predictions. CivisML will automatically parallelize predictions when you have a large dataset, so no matter how big the dataset, you won't need to wait too long. Let's use the best model we found from the previous step to make predictions about which users are most likely to upgrade in the future.
End of explanation
predict.table.head()
n_users = len(predict.table)
most_likely = (predict.table
.sort_values(by="upgrade_1", ascending=False))[:int(0.05 * n_users)]
print(f'The most likely {len(most_likely)} of {len(predict.table)} users to upgrade '
f'have scores ranging from {most_likely.iloc[-1, 0]} to {most_likely.iloc[0, 0]}.')
Explanation: If you wanted to store the predictions in a Redshift table, you could have provided an output_table parameter. Since this is a relatively small dataset, it's faster to skip the table write and pull down the predictions directly. Let's find the 5% of users who are most likely to upgrade.
End of explanation
client.users.list_me?
client.users.list_me()
Explanation: <a id='raw-api'></a>
4. Direct API Access
You can inspect the client object and read documentation about individual functions just as you would with any other Python code. For example, you can tab-complete after typing "client." to get a list of API "endpoints", and further tab-complete from "client.users." to find a list of API calls related to users. Here's the way you can ask Civis Platform who it thinks you are:
End of explanation
db_id = client.get_database_id(DATABASE)
my_tables = client.tables.list(database_id=db_id, schema='public')
# Print all tables in the schema
for tt in my_tables:
if tt['name'].startswith('cta'):
print(tt['name'])
Explanation: <a id='tables'></a>
4.1 Tables
Next, let's list the tables available in a single schema. The Civis Data Science API often uses unique IDs instead of names, and the APIClient gives you convenience functions to look up those IDs if you know the name. In this case, we need to know the database ID of our database, rather than the name.
End of explanation
cta_table = civis.find_one(my_tables, name='cta_ridership_daily')
tb_info = client.tables.get(cta_table.id)
col_types = {c.name: c.sql_type for c in tb_info.columns}
print(col_types)
Explanation: Now let's use the API to look up some information about the CTA daily ridership table. my_tables is a list of API responses. Because searching through lists like this is common, the Civis Python API client provides helper functions (civis.find and civis.find_one) which will locate the entry or entries you're interested in. Let's find the ID of the "cta_ridership_daily" table and use that to look up the names and types of each of the columns.
End of explanation
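If you're curious how these helpers behave, here's a simplified, standalone sketch of find/find_one over plain dictionaries. (The real civis.find and civis.find_one operate on Response objects and support more filter types; this is just the core idea.)

```python
def find(objects, **kwargs):
    """Return every item whose fields match all of the keyword filters."""
    return [o for o in objects if all(o.get(k) == v for k, v in kwargs.items())]

def find_one(objects, **kwargs):
    """Return the first matching item, or None if nothing matches."""
    matches = find(objects, **kwargs)
    return matches[0] if matches else None

# A toy stand-in for the list of table responses.
tables = [
    {"id": 1, "name": "cta_ridership_daily", "column_count": 5},
    {"id": 2, "name": "rides_by_month", "column_count": 3},
]
print(find_one(tables, name="cta_ridership_daily")["id"])  # 1
```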
# Traditional method for listing tables
# (set to list a max of 3 different tables)
# This returns multiple tables at the same time.
# Increase the "page_num" to see more tables.
my_three_tables = client.tables.list(database_id=db_id, schema='public',
limit=3, page_num=1)
# Iterating request (will return all available tables, may take some time to run)
# When iterator is set to True, the function yields a single table at a time.
tb_iter = client.tables.list(database_id=db_id, schema='public', iterator=True)
five_col_tbs = [t for t in tb_iter if t['column_count'] == 5]
print(f"Tables with five columns: {[t['name'] for t in five_col_tbs]}.")
Explanation: <a id='pagination'></a>
4.2 Paginated responses
Some endpoints may contain a lot of data which Civis Platform will only serve over multiple requests. For example, client.tables.list() will only return information on a maximum of 1000 tables in a single call (the default is 50). Therefore, if we need to collect data on 4000 different tables, we'll need to make at least 4 separate requests to get all of the data. (Use the page_num argument to select additional "pages" of data.) To make this easier, the client includes a special iterator parameter on endpoints which may require making multiple requests to get all of the data. These requests could require making a large number of API calls, so use iterator=True sparingly!
Let's pretend that the "public" schema has more tables than we want to list at once and iterate through it to find all of the tables with 5 columns.
End of explanation
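To make the mechanics concrete, here's a standalone sketch of the kind of generator the iterator parameter gives you: it keeps requesting pages until a short page signals the end. The fake_list function below stands in for a paginated endpoint and is purely illustrative:

```python
ALL_TABLES = [{"name": f"table_{i}"} for i in range(7)]  # pretend database catalog

def fake_list(limit, page_num):
    """Stand-in for a paginated endpoint such as client.tables.list."""
    start = (page_num - 1) * limit
    return ALL_TABLES[start:start + limit]

def list_pages(fetch_page, limit=50):
    """Yield items one at a time, requesting pages until one comes back short."""
    page_num = 1
    while True:
        page = fetch_page(limit=limit, page_num=page_num)
        yield from page
        if len(page) < limit:
            break
        page_num += 1

names = [t["name"] for t in list_pages(fake_list, limit=3)]
print(names)
```

Each `yield` hands back one item, so the caller can stop early without paying for pages it never reads.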
client.scripts.get_containers_runs(best_model.train_result_.job_id,
best_model.train_result_.run_id)
Explanation: <a id='api-response'></a>
4.3 The API Response
Every time you communicate with the Civis Data Science API, you get a response. In fact, it's a civis.response.Response object. Its contents can be accessed either like a dictionary or as normal attributes. The Response always comes back immediately, even if it's to acknowledge that you've started something that will take a long time to finish. It will contain either the information you've asked for or an acknowledgement of the action you took. Here's an example of the Response when we ask for the status of the best model we built in section 3.
End of explanation
import cloudpickle
import io
import json
import os
import requests
def get_bike_status(api_url=divvy_api):
bikes = requests.get(api_url).json()
buf = io.TextIOWrapper(io.BytesIO()) # `json` writes text
json.dump(bikes, buf)
buf.seek(0)
bike_file_id = civis.io.file_to_civis(buf.buffer, 'Divvy status')
print(f"Stored Divvy station data at {bike_file_id}.")
client = civis.APIClient()
job_id = os.environ["CIVIS_JOB_ID"]
run_id = os.environ["CIVIS_RUN_ID"]
client.scripts.post_containers_runs_outputs(job_id, run_id, "File", bike_file_id)
code_file_id = civis.io.file_to_civis(
io.BytesIO(cloudpickle.dumps(get_bike_status)), 'Divvy script')
print(f"Uploaded Divvy function to file {code_file_id}.")
Explanation: <a id='something-new'></a>
5. Build something new
The most flexible way to interact with Civis Platform is by writing your own code and using Civis Platform to run it. For example, you could imagine wanting to write a program that counts from 1 to 100 and replaces every number that's evenly divisible by 3 with "fizz", any number divisible by 5 with "buzz", and numbers divisible by both 3 and 5 with "fizzbuzz". There's no Data Science API function that implements FizzBuzz, so you would need to write that yourself, but you can use Civis Platform to schedule it, share it, and run it in the cloud while you free up your laptop for other purposes. Container Scripts are our general-purpose solution for taking any code and running it in Civis Platform.
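For completeness, FizzBuzz itself is short enough to sketch here — this is exactly the kind of small self-contained program you could drop into a Container Script:

```python
def fizzbuzz(n=100):
    """Return FizzBuzz strings for the numbers 1..n."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("fizzbuzz")
        elif i % 3 == 0:
            out.append("fizz")
        elif i % 5 == 0:
            out.append("buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15)[-1])  # fizzbuzz
```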
Container Scripts become really powerful when you pair the flexibility of bring-your-own-code with the power of the Data Science API. One of our favorite design patterns is writing code that calls the Data Science API as part of a more customized workflow. For example, we might use the Data Science API to pull a table into a pandas dataframe, write special-purpose pandas code for manipulating the dataframe, use the Data Science API again to build a model, write more code to analyze the results of the model, and finally publish those analysis results as a report in Civis Platform. The most sophisticated data science code we write is delivered and shared via Container Scripts because of how easy it is to write software in Python or R (or, really, any language) calling API functions for accessing Civis Platform.
<a id='container-scripts'></a>
5.1 Creating and running Container Scripts
Let's take our earlier of example of checking the status of the Chicago bike share system and package it into a script which we can schedule to run regularly. Here we're writing our task as a function and using cloudpickle, an open-source Python library which can pickle dynamically-defined functions, to send it to Civis Platform. You could also write this code as a text file and run it as a script.
End of explanation
from concurrent.futures import wait
cmd = f'''civis files download {code_file_id} myscript.pkl;
python -c "import cloudpickle; cloudpickle.load(open(\\\"myscript.pkl\\\", \\\"rb\\\"))()"'''
container_job = client.scripts.post_containers(
required_resources = {"cpu": 256, "memory": 512, "diskSpace": 2},
name="Divvy download script",
docker_command = cmd,
docker_image_name = "civisanalytics/datascience-python",
docker_image_tag = "3.1.0")
run = civis.utils.run_job(container_job.id)
wait([run])
Explanation: Now that we've uploaded the function, we tell Civis Platform to run it. A Container Script consists of an environment (the "Container", which is a Docker container) and a bash command to run inside that container. Civis provides some general-purpose Docker images, or you can use any public Docker image. Here we're using "datascience-python". Note that we're using a specific image tag, rather than the default "latest". It's a good practice to set an image tag. The "latest" tag will change with new releases, and that could unexpectedly cause a job which used to work to start failing.
In this example, I'm storing my code in a Civis Platform file, but Container Scripts can also access code which you've stored in GitHub. A file is great for small, quick examples like this, but GitHub is a better way to handle larger or production code. Version control is your friend!
Like many operations with the Civis Data Science API, running a Container Script is two steps -- first, you create the job (with client.scripts.post_containers). Second, you tell Civis Platform to start running the job. You can use the client.scripts.post_containers_runs to start a run (this will return a Response), or you can use the convenience function civis.utils.run_job to start a run. If you use civis.utils.run_job, you'll get back a CivisFuture, which is a convenient way to track when your run has finished.
End of explanation
remote_output_file_id = client.scripts.list_containers_runs_outputs(
container_job.id, run.poller_args[1])[0].object_id
print(f"Bike data are stored at file# {remote_output_file_id}.")
import pprint
station_data = civis.io.file_to_json(remote_output_file_id)
for station in station_data['stationBeanList']:
if station['stationName'] == 'Franklin St & Monroe St':
pprint.pprint(station)
Explanation: We've stored the bike station data as a JSON in Civis Platform, and set a "run output" on the script which read the data. Run outputs are a way for you to transfer data from one job to another. You can inspect this job to find its run outputs, and use the file ID you find there to retrieve the data about the Chicago bike sharing network.
End of explanation
prediction_tablename = 'scratch.brandable_predictions'
template_id = civis.find_one(client.templates.list_scripts(limit=1000),
name='Import from URL').id
url = client.files.get(predict.metadata['output_file_ids'][0])['file_url']
upgrade_prediction_import = client.scripts.post_custom(
from_template_id=template_id,
name="Import Brandable Predictions",
arguments={'URL': url,
'TABLE_NAME': prediction_tablename,
'IF_EXISTS': 'drop',
'DATABASE_NAME': DATABASE})
import_fut = civis.utils.run_job(upgrade_prediction_import.id)
import_fut.result()
Explanation: <a id='custom-scripts'></a>
5.2 Custom Scripts
Remember that prediction we made about which customers are likely to upgrade? We didn't store it in a table at the time. What if we change our minds? We could download it in this notebook and then use civis.io.dataframe_to_civis to make a new table. (Most of the time this will be the right thing to do.) However, we could also use the "Import from URL" Template to create a Custom Script which will do that for us.
If you (or one of your colleagues) has created an especially useful Container Script which you'll want to run over and over, you can turn it into a Template. Once you have access to a templated script (Civis provides a few that we've found useful), you can run it for yourself by creating a "Custom Script". The Custom Script lets you modify a few parameters and then run the code that your colleague wrote.
If you know the template ID of a template script, you can use client.scripts.post_custom to create a new job. As with the Container Script, we'll use civis.utils.run_job to start the run so that we get back a CivisFuture.
End of explanation
civis.io.query_civis(database=DATABASE, sql=f"DROP TABLE IF EXISTS {prediction_tablename}")
Explanation: Finally, let's keep the database tidy and delete this table.
End of explanation
civis.civis.RETRY_CODES
Explanation: <a id='ds-api'></a>
6. A Data Science API
This has been a whirlwind tour of the Civis Data Science API. Civis Platform has a lot more features than what we've covered here, such as sharing, enhancements, reports, and more. This tour gives you what you need to get started. Use the API client documentation or the API documentation to get a complete picture of everything the API can do, and contact support@civisanalytics.com if you run into trouble. The Civis Data Science API is a powerful toolbox that you can use to build, scale, and deploy your data science workflows!
Appendix
These sections will give you extra context on what's going on behind the scenes with the Civis API client.
<a id='what-is-an-api-client'></a>
A.1 What is an API client?
API: Application Programming Interface
* A set of tools for accessing Civis Platform functionality. An API is an official way for two pieces of code to talk to each other
* Civis Platform itself works by issuing API calls, which are based on HTTP
* But HTTP calls are unwieldy, so the API clients provide a more streamlined way of making these requests
* The API clients can be run interactively or in a script
There are Civis API clients in Python and R.
Everything you can do with an API client is supported by a Civis Data Science API Endpoint. You can find complete documentation on these endpoints here: https://api.civisanalytics.com
RESTful API conventions
The Civis Data Science API is "RESTful". That means it adheres to a set of conventions about the components of the API and their relationships. The world wide web uses REST conventions.
The API understands some basic HTTP "verbs":
* GET → Retrieve information on objects or members [get, list]
* POST → Create a new item or entry in an item [create]
* PUT → Replace something [update]
* DELETE → Delete [delete]
HTTP Status Codes
When you send a request to an API, it will give you a status code. Common codes include:
100-level codes: Informational
200-level codes: Success
200 OK
300-level codes: Redirection
400-level codes: Client error
400 Bad request
401 Unauthorized (authentication failed)
403 Forbidden (similar to 401)
404 File not found
408 Request timeout
409 Conflict in the request, such as an edit conflict
429 Too many requests: You need to wait before you can use the API again
500-level codes: Server error
500 Internal server error
For example, you might see this error if you try to call a list endpoint with page_num=0:
CivisAPIError: (400) invalid 'page_num' 0 - must be an integer greater than zero
The API client has translated the API's reply into a Python exception. The Response object for that error is:
{'code': 400,
'error': 'invalid',
'errorDescription': "invalid 'page_num' 0 - must be an integer greater than zero"}
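Roughly speaking, the client raises an exception built from that payload. The class below is a simplified stand-in for illustration only — the real CivisAPIError carries more information:

```python
class CivisAPIErrorSketch(Exception):
    """Simplified stand-in for the client's API error class."""

    def __init__(self, response):
        self.status_code = response["code"]
        self.error_message = response["errorDescription"]
        super().__init__(f"({self.status_code}) {self.error_message}")

error_response = {
    "code": 400,
    "error": "invalid",
    "errorDescription": "invalid 'page_num' 0 - must be an integer greater than zero",
}

try:
    raise CivisAPIErrorSketch(error_response)
except CivisAPIErrorSketch as exc:
    caught = str(exc)

print(caught)  # (400) invalid 'page_num' 0 - must be an integer greater than zero
```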
<a id='retries'></a>
A.2 Rate limits and retries
If you query the Civis Data Science API too frequently, Civis Platform may return a 429 error response, indicating that you need to wait a while before you can make another request. The Python API client will automatically wait and resend your request when your rate limit refreshes, so there's nothing for you to do. Be aware that too many requests too fast will make your code wait for a while.
Currently, the rate limit is 1000 requests per 5 minutes. You can check your rate limit by looking at the Response.headers['X-RateLimit-Limit'] on any Response object that you get back. You can check out the Response.calls_remaining if you're curious how many API calls you have left before you get a time out.
The Python API client will automatically retry on certain 500-level errors as well. That will give your code extra reliability when using the API client over the raw API. The full list of HTTP status codes which the client will retry are:
End of explanation |
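To illustrate what retrying on those codes looks like, here's a standalone sketch of retry-with-backoff logic. The RETRYABLE set and the flaky endpoint below are illustrative only — they are not the client's actual retry list or implementation:

```python
import time

# Illustrative retryable codes only -- not the client's actual RETRY_CODES list.
RETRYABLE = {429, 502, 503, 504}

def with_retries(call, max_retries=3, backoff=0.01):
    """Retry `call` on retryable status codes, backing off between attempts."""
    for attempt in range(max_retries + 1):
        status, body = call()
        if status not in RETRYABLE or attempt == max_retries:
            return status, body
        time.sleep(backoff * 2 ** attempt)  # exponential backoff between tries

# Fake endpoint that rate-limits twice before succeeding.
attempts = []
def flaky_call():
    attempts.append(1)
    return (429, None) if len(attempts) < 3 else (200, "ok")

status, body = with_retries(flaky_call)
print(status, body, len(attempts))  # 200 ok 3
```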
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC
Step1: Graph regularization for image classification using synthesized graphs
By Sayak Paul
<br>
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Dependencies and imports
Step3: Flowers dataset
The flowers dataset contains a total of 3670 images of flowers categorized into 5 classes -
daisy
dandelion
roses
sunflowers
tulips
The dataset is balanced, i.e., each class contains roughly the same number of examples. In the cells below, we first load the dataset using TensorFlow Datasets and then visualize a couple of examples from the dataset.
The dataset does not come pre-split into training, validation, and test splits. However, when downloading the dataset, we can specify a splitting ratio. Here we'll be using a split of 85
Step4: After downloading the dataset and splitting it, we can visualize a few samples from it.
Step5: In the next cell, we determine the number of samples present in the splits.
Step7: Graph construction
Graph construction involves creating embeddings for the images and then using
a similarity function to compare the embeddings.
Create sample embeddings
We will use a pre-trained DenseNet121 model to create embeddings in the
tf.train.Example format for each sample in the input. We will store the
resulting embeddings in the TFRecord format along with an additional feature
that represents the ID of each sample. This is important and will allow us match
sample embeddings with corresponding nodes in the graph later. We'll start this section with a utility function to build a feature extraction model.
One important detail to note here is that we are using random projections in order to reduce the dimensionality of the final vector coming out of the pre-trained model. The final embedding vector is 1024-dimensional. When the size of the dataset is large, such high-dimensional vectors can consume a lot of memory. Hence depending on your use-case, it might be a good idea to further reduce the dimensionality.
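To make the idea concrete, here's a minimal pure-Python sketch of random projection: a fixed random matrix maps each high-dimensional embedding down to a smaller vector. (A real pipeline would use an optimized implementation; this only shows the mechanics, and the dimensions are illustrative.)

```python
import random

def random_projection(vectors, out_dim, seed=42):
    """Project each vector down to `out_dim` dimensions with a fixed random matrix."""
    in_dim = len(vectors[0])
    rng = random.Random(seed)
    # One shared Gaussian projection matrix, one row per output dimension.
    matrix = [[rng.gauss(0, 1) for _ in range(in_dim)] for _ in range(out_dim)]
    return [[sum(w * x for w, x in zip(row, vec)) for row in matrix] for vec in vectors]

# Fake 1024-dimensional "embeddings" for three samples.
embeddings = [[float(i + j) for j in range(1024)] for i in range(3)]
reduced = random_projection(embeddings, out_dim=128)
print(len(reduced), len(reduced[0]))  # 3 128
```

Because the projection matrix is seeded, the same input always maps to the same reduced vector — which matters when embeddings must line up with graph nodes later.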
Step15: We encourage you to try out other pre-trained models available in the tf.keras.applications module and also on TensorFlow Hub. We'll now write a couple of utility functions to create the sample embeddings for graph construction.
Step16: Graph building
Now that we have the sample embeddings, we will use them to build a similarity
graph, i.e, nodes in this graph will correspond to samples and edges in this
graph will correspond to similarity between pairs of nodes.
Neural Structured Learning provides a graph building library to build a graph
based on sample embeddings. It uses
cosine similarity as the
similarity measure to compare embeddings and build edges between them. It also
allows us to specify a similarity threshold, which can be used to discard
dissimilar edges from the final graph. In this example, using 0.7 as the
similarity threshold and 12345 as the random seed, we end up with a graph that
has 493,623 bi-directional edges. Here we're using the graph builder's support
for locality-sensitive hashing
(LSH) to speed up graph building. For details on using the graph builder's LSH
support, see the
build_graph_from_config
API documentation.
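The core of similarity-based graph building can be sketched in a few lines: compute cosine similarity for candidate pairs and keep edges at or above the threshold. Note that this brute-force version compares all pairs — avoiding that cost is exactly what the LSH support above is for — so treat it only as an illustration:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def build_graph(embeddings, threshold=0.7):
    """Return (i, j, similarity) edges for pairs at or above the threshold."""
    edges = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            sim = cosine_similarity(embeddings[i], embeddings[j])
            if sim >= threshold:
                edges.append((i, j, sim))
    return edges

# Three toy 2-d embeddings: the first two point the same way, the third doesn't.
embs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
edges = build_graph(embs, threshold=0.7)
print(edges)  # only the (0, 1) pair clears the 0.7 threshold
```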
Step17: Each bi-directional edge is represented by two directed edges in the output TSV
file, so that file contains 493,623 * 2 = 987,246 total lines
Step21: Sample features
We create sample features for our problem using the tf.train.Example format
and persist them in the TFRecord format. Each sample will include the
following three features
Step22: A note on create_records()
Step23: Base model
We'll first build a model without graph regularization. We'll use a simple convolutional neural network (CNN) for the purpose of this tutorial. However, this can be easily replaced with more sophisticated network architectures.
Global variables
Step25: Hyperparameters
We will use an instance of HParams to include various hyperparameters and
constants used for training and evaluation. We briefly describe each of them
below
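A hyperparameter container along these lines can be sketched with a frozen dataclass. The field names and values below are illustrative only, not the tutorial's actual settings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HParamsSketch:
    """Illustrative fields and values only -- not the tutorial's actual settings."""
    train_epochs: int = 10
    batch_size: int = 32
    max_neighbors: int = 3             # neighbors consulted for graph regularization
    graph_reg_multiplier: float = 0.1  # weight of the graph regularization loss term

hparams = HParamsSketch()
print(hparams.batch_size, hparams.max_neighbors)  # 32 3
```

Freezing the dataclass keeps a training run from accidentally mutating its own configuration mid-experiment.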
Step28: Prepare the data
Now, we will prepare our dataset with TensorFlow's data module (tf.data). This will include parsing the TFRecords, structuring them, batching them, and shuffling them if necessary.
In the next two cells, we first define a default value for the image examples which will come in handy when parsing the neighbor examples. Then we write a utility function to create the tf.data.Dataset objects which will be fed to our model in a moment.
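Outside of TensorFlow, the batching-and-shuffling part of such a pipeline boils down to a generator like the following — a pure-Python sketch of the concept, not the tf.data code itself:

```python
import random

def make_dataset(records, batch_size, shuffle=False, seed=7):
    """Yield lists of up to `batch_size` records, optionally shuffling first."""
    records = list(records)  # copy so shuffling never mutates the caller's data
    if shuffle:
        random.Random(seed).shuffle(records)
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

samples = [{"id": i, "label": i % 5} for i in range(10)]
batches = list(make_dataset(samples, batch_size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```

tf.data performs the same parse/shuffle/batch steps lazily and in parallel, but the data flow is the same shape.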
Step29: Visualization
In this section, we visualize the neighbor images as computed by Neural Structured Learning during building the graph.
Step31: In the figure above, weight denotes the similarity strength of the examples.
Model training (no graph regularization)
With our datasets prepared, we are now ready to train our shallow CNN model without graph regularization.
Step32: After building and initializing the model in Keras, we can compile it and finally train it.
Step33: Plot training metrics
Step34: Graph regularization
We are now ready to try graph regularization using the base model that we built
above. We will use the GraphRegularization wrapper class provided by the
Neural Structured Learning framework to wrap the base (CNN) model to include
graph regularization. The rest of the steps for training and evaluating the
graph-regularized model are similar to that of the base model.
Create graph-regularized model
To assess the incremental benefit of graph regularization, we will create a new
base model instance. This is because the base model has already been trained
for a few iterations, and reusing this trained model to create the
graph-regularized model would not be a fair comparison.
Step35: Plot training metrics of the graph-regularized model | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 Google LLC
End of explanation
!pip install --quiet neural-structured-learning
!pip install --quiet tensorflow-hub
Explanation: Graph regularization for image classification using synthesized graphs
By Sayak Paul
<br>
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/graph_keras_cnn_flowers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/graph_keras_cnn_flowers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
This notebook is the image-classification counterpart of Graph regularization
for sentiment classification using synthesized graphs. In this notebook, we will build a flower classification model that can categorize images of flowers into discrete classes such as sunflowers, roses,
tulips, etc.
We will demonstrate the use of graph regularization in this notebook by building a graph from the given input. The general recipe for building a graph-regularized model using the Neural Structured Learning (NSL) framework when the input does not contain an explicit graph is as follows:
Create embeddings for each image sample in the input. This can be done using
pre-trained models such as EfficientNet,
Inception,
BiT etc.
Build a graph based on these embeddings by using a similarity metric such as
the 'L2' distance, 'cosine' distance, etc. Nodes in the graph correspond to
samples and edges in the graph correspond to similarity between pairs of
samples.
Generate training data from the above synthesized graph and sample features.
The resulting training data will contain neighbor features in addition to
the original node features.
Create a neural network as a base model using the Keras sequential,
functional, or subclass API.
Wrap the base model with the GraphRegularization wrapper class, which is
provided by the NSL framework, to create a new graph Keras model. This new
model will include a graph regularization loss as the regularization term in
its training objective.
Train and evaluate the graph Keras model.
Note: We expect that it would take readers about 1 hour to go through this
tutorial.
Requirements
Install the Neural Structured Learning package.
Install tensorflow-hub.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import neural_structured_learning as nsl
import tensorflow as tf
import tensorflow_datasets as tfds
# Resets notebook state
tf.keras.backend.clear_session()
tfds.disable_progress_bar()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print(
"GPU is",
"available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
Explanation: Dependencies and imports
End of explanation
train_ds, validation_ds = tfds.load(
"tf_flowers",
split=["train[:85%]", "train[85%:]"],
as_supervised=True
)
Explanation: Flowers dataset
The flowers dataset contains a total of 3670 images of flowers categorized into 5 classes -
daisy
dandelion
roses
sunflowers
tulips
The dataset is balanced i.e., each class contains roughly the same number of examples. In the cells below, we first load the dataset using TensorFlow Datasets and then visualize a couple of examples from the dataset.
The dataset does not come pre-split into training, validation, and test splits. However, when downloading the dataset, we can specify a splitting ratio. Here we'll be using a split of 85:15 for the training and validation sets.
End of explanation
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(train_ds.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image)
plt.title(int(label))
plt.axis("off")
Explanation: After downloading the dataset and splitting it, we can visualize a few samples from it.
End of explanation
num_train_examples = tf.data.experimental.cardinality(train_ds)
num_val_examples = tf.data.experimental.cardinality(validation_ds)
print(f"Total training examples: {num_train_examples}")
print(f"Total validation examples: {num_val_examples}")
Explanation: In the next cell, we determine the number of samples present in the splits.
End of explanation
IMG_SIZE = 224 #@param ["128", "224"] {type:"raw"}
PROJECTED_DIM = 128 #@param {type:"slider", min:128, max:1024, step:128}
#@markdown `IMG_SIZE` of 224 denotes the 224 $\times$ 224 resolution.
def create_feature_extractor_model():
  """Creates a feature extractor model with DenseNet121."""
inputs = tf.keras.layers.Input((IMG_SIZE, IMG_SIZE, 3))
densenet_model = tf.keras.applications.DenseNet121(weights="imagenet",
input_shape=(IMG_SIZE, IMG_SIZE, 3),
pooling="avg", include_top=False
)
densenet_model.trainable = False
x = tf.keras.applications.densenet.preprocess_input(inputs)
outputs = densenet_model(x, training=False)
return tf.keras.Model(inputs, outputs, name="densenet_feature_extractor")
feature_extractor = create_feature_extractor_model()
feature_extractor.summary()
Explanation: Graph construction
Graph construction involves creating embeddings for the images and then using
a similarity function to compare the embeddings.
Create sample embeddings
We will use a pre-trained DenseNet121 model to create embeddings in the
tf.train.Example format for each sample in the input. We will store the
resulting embeddings in the TFRecord format along with an additional feature
that represents the ID of each sample. This is important and will allow us to match
sample embeddings with corresponding nodes in the graph later. We'll start this section with a utility function to build a feature extraction model.
One important detail to note here is that we are using random projections in order to reduce the dimensionality of the final vector coming out of the pre-trained model. The final embedding vector is 1024-dimensional. When the size of the dataset is large, such high-dimensional vectors can consume a lot of memory. Hence depending on your use-case, it might be a good idea to further reduce the dimensionality.
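The random-projection step can be sketched with plain NumPy (the matrix shapes below mirror the dimensions described above; the embedding itself is made-up data, not a real DenseNet feature vector):

```python
import numpy as np

rng = np.random.default_rng(0)
original_dim, projected_dim = 1024, 128

# Random Gaussian projection matrix of shape (original_dim, projected_dim).
projection_matrix = rng.standard_normal((original_dim, projected_dim))

# A fake 1024-d embedding stands in for the DenseNet feature vector.
embedding = rng.standard_normal(original_dim)

# Projecting compresses the embedding to 128 dimensions.
compressed = embedding.dot(projection_matrix)
print(compressed.shape)  # (128,)
```

Random projections approximately preserve pairwise distances (the Johnson-Lindenstrauss idea), which is why the graph built from compressed embeddings remains meaningful.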
End of explanation
def resize(image, label):
  """Resizes the images to (IMG_SIZE x IMG_SIZE) size."""
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
# Resize all the images to uniform shape so that they can
# be batched.
train_ds = train_ds.map(resize)
validation_ds = validation_ds.map(resize)
def _int64_feature(value):
  """Returns int64 tf.train.Feature."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=value.tolist()))
def _bytes_feature(value):
  """Returns bytes tf.train.Feature."""
return tf.train.Feature(
bytes_list=tf.train.BytesList(value=[value.encode('utf-8')]))
def _float_feature(value):
  """Returns float tf.train.Feature."""
return tf.train.Feature(float_list=tf.train.FloatList(value=value.tolist()))
def create_embedding_example(feature_extractor, image,
projection_matrix, record_id):
  """Create tf.Example containing the sample's embedding and its ID."""
image_features = feature_extractor(image[None, ...])
image_features_numpy = image_features.numpy().squeeze()
compressed_image_features = image_features_numpy.dot(projection_matrix)
features = {
"id": _bytes_feature(str(record_id)),
"embedding": _float_feature(compressed_image_features)
}
return tf.train.Example(features=tf.train.Features(feature=features))
def generate_random_projection_weights(original_dim=1024,
projected_dim=PROJECTED_DIM):
  """Generates a random projection matrix."""
random_projection_matrix = np.random.randn(
projected_dim, original_dim).T
return random_projection_matrix
def create_embeddings(feature_extractor, dataset, output_path,
starting_record_id):
  """Creates TFRecords with embeddings of the images."""
projection_matrix = generate_random_projection_weights()
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(output_path) as writer:
for image, _ in dataset:
example = create_embedding_example(feature_extractor,
image,
projection_matrix,
record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features containing embeddings for training data in
# TFRecord format.
create_embeddings(feature_extractor, train_ds, "flowers_embeddings.tfr", 0)
Explanation: We encourage you to try out other pre-trained models available in the tf.keras.applications module and also on TensorFlow Hub. We'll now write a couple of utility functions to create the sample embeddings for graph construction.
End of explanation
similarity_threshold = 0.7
graph_builder_config = nsl.configs.GraphBuilderConfig(
similarity_threshold=similarity_threshold,
lsh_splits=10, lsh_rounds=15, random_seed=12345)
nsl.tools.build_graph_from_config(["flowers_embeddings.tfr"],
"flowers_graph_70.tsv",
graph_builder_config)
Explanation: Graph building
Now that we have the sample embeddings, we will use them to build a similarity
graph, i.e, nodes in this graph will correspond to samples and edges in this
graph will correspond to similarity between pairs of nodes.
Neural Structured Learning provides a graph building library to build a graph
based on sample embeddings. It uses
cosine similarity as the
similarity measure to compare embeddings and build edges between them. It also
allows us to specify a similarity threshold, which can be used to discard
dissimilar edges from the final graph. In this example, using 0.7 as the
similarity threshold and 12345 as the random seed, we end up with a graph that
has 493,623 bi-directional edges. Here we're using the graph builder's support
for locality-sensitive hashing
(LSH) to speed up graph building. For details on using the graph builder's LSH
support, see the
build_graph_from_config
API documentation.
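The thresholding idea can be made concrete with a tiny self-contained sketch (toy 2-d embeddings and a hand-rolled cosine similarity, not the library's actual implementation):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Three made-up sample embeddings.
emb = {"A": np.array([1.0, 0.0]),
       "B": np.array([0.9, 0.1]),
       "C": np.array([0.0, 1.0])}

threshold = 0.7
# Keep only pairs whose similarity clears the threshold.
edges = [(u, v) for u in emb for v in emb
         if u < v and cosine_similarity(emb[u], emb[v]) >= threshold]
print(edges)  # [('A', 'B')]
```

Only A and B point in nearly the same direction, so only that edge survives the 0.7 cutoff; C is dropped as a dissimilar neighbor.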
End of explanation
!wc -l flowers_graph_70.tsv
Explanation: Each bi-directional edge is represented by two directed edges in the output TSV
file, so that file contains 493,623 * 2 = 987,246 total lines:
End of explanation
def _bytes_feature_image(value):
  """Returns bytes tf.train.Feature."""
return tf.train.Feature(
bytes_list=tf.train.BytesList(value=[value]))
def create_example(image, label, record_id):
  """Create tf.Example containing the image, label, and ID."""
features = {
"id": _bytes_feature(str(record_id)),
"image": _bytes_feature_image(image.numpy()),
"label": _int64_feature(np.asarray([label])),
}
return tf.train.Example(features=tf.train.Features(feature=features))
def create_records(dataset, record_path, starting_record_id):
  """Generates TFRecords from a tf.data.Dataset object."""
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(record_path) as writer:
for image, label in dataset:
image = tf.cast(image, tf.uint8)
image = tf.image.encode_jpeg(image, optimize_size=True,
chroma_downsampling=False)
example = create_example(image, label, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features (images and labels) for training and validation
# data in TFRecord format.
next_record_id = create_records(train_ds,
"train_data.tfr", 0)
create_records(validation_ds, "validation_data.tfr",
next_record_id)
Explanation: Sample features
We create sample features for our problem using the tf.train.Example format
and persist them in the TFRecord format. Each sample will include the
following three features:
id: The node ID of the sample.
image: A byte list containing raw image vectors.
label: A singleton int64 identifying the target class of the image.
End of explanation
nsl.tools.pack_nbrs(
"train_data.tfr",
"",
"flowers_graph_70.tsv",
"nsl_train_data.tfr",
add_undirected_edges=True,
max_nbrs=3
)
!ls -lh *_data.tfr
Explanation: A note on create_records():
Images are serialized as byte-strings in TFRecords. The tf.image.encode_jpeg() function allows us to do that in an optimal way (when optimize_size is set to True). Inputs to that function should be integers. This is why we first cast the image to tf.uint8 and then pass it to tf.image.encode_jpeg(). To know more, refer to this tutorial.
Augment training data with graph neighbors
Since we have the sample features and the synthesized graph, we can generate the
augmented training data for Neural Structured Learning. The NSL framework
provides a library to combine the graph and the sample features to produce
the final training data for graph regularization. The resulting training data
will include original sample features as well as features of their corresponding
neighbors.
In this tutorial, we consider undirected edges and use a maximum of 3 neighbors
per sample to augment training data with graph neighbors.
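The augmented records follow a fixed naming scheme for neighbor features. A quick sketch of the keys produced for a maximum of 3 neighbors, using the same NBR_FEATURE_PREFIX/NBR_WEIGHT_SUFFIX constants defined later in this notebook:

```python
NBR_FEATURE_PREFIX = "NL_nbr_"
NBR_WEIGHT_SUFFIX = "_weight"

keys = []
for i in range(3):  # matches max_nbrs=3 above
    # Each neighbor contributes its image feature and an edge weight.
    keys.append("{}{}_{}".format(NBR_FEATURE_PREFIX, i, "image"))
    keys.append("{}{}{}".format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX))
print(keys)
```

These are exactly the keys the parsing code expects when building the training dataset.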
End of explanation
NBR_FEATURE_PREFIX = "NL_nbr_"
NBR_WEIGHT_SUFFIX = "_weight"
Explanation: Base model
We'll first build a model without graph regularization. We'll use a simple convolutional neural network (CNN) for the purpose of this tutorial. However, this can be easily replaced with more sophisticated network architectures.
Global variables
End of explanation
class HParams(object):
  """Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 5
self.num_train_examples = num_train_examples
self.num_val_examples = num_val_examples
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.3
self.num_neighbors = 2
### network architecture parameters
self.num_channels = 32
self.kernel_size = 3
### training parameters
self.train_epochs = 30
self.batch_size = 64
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
Explanation: Hyperparameters
We will use an instance of HParams to include various hyperparameters and
constants used for training and evaluation. We briefly describe each of them
below:
num_classes: There are 5 classes.
num_train_examples The number of training examples.
num_val_examples: The number of validation examples.
distance_type: This is the distance metric used to regularize the sample
with its neighbors.
graph_regularization_multiplier: This controls the relative weight of
the graph regularization term in the overall loss function.
num_neighbors: The number of neighbors used for graph regularization.
This value has to be less than or equal to the max_nbrs argument used
above when invoking nsl.tools.pack_nbrs.
num_channels: The number of channels to be used in the convolutional layers.
kernel_size: Kernel size to be used in the convolutional layers.
train_epochs: The number of training epochs.
batch_size: Batch size used for training and evaluation.
eval_steps: The number of batches to process before deeming evaluation
is complete. If set to None, all instances in the test set are evaluated.
End of explanation
default_jpeg_value = tf.ones((IMG_SIZE, IMG_SIZE, 3), dtype=tf.uint8)
default_jpeg_value *= 255
default_jpeg_value = tf.image.encode_jpeg(default_jpeg_value, optimize_size=True,
chroma_downsampling=False)
def make_dataset(file_path, training=False):
  """Creates a `tf.data.TFRecordDataset`.

  Args:
    file_path: Name of the file in the `.tfrecord` format containing
      `tf.train.Example` objects.
    training: Boolean indicating if we are in training mode.

  Returns:
    An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
    objects.
  """
def parse_example(example_proto):
    """Extracts relevant fields from the `example_proto`.

    Args:
      example_proto: An instance of `tf.train.Example`.

    Returns:
      A pair whose first value is a dictionary containing relevant features
      and whose second value contains the ground truth labels.
    """
feature_spec = {
'image': tf.io.FixedLenFeature([], tf.string,
default_value=default_jpeg_value),
'label': tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above during training.
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'image')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,
NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature([], tf.string,
default_value=default_jpeg_value)
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
labels = features.pop('label')
# We need to convert the byte-strings back to images.
features['image'] = tf.image.decode_jpeg(features['image'], channels=3)
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'image')
features[nbr_feature_key] = tf.image.decode_jpeg(features[nbr_feature_key],
channels=3)
return features, labels
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(HPARAMS.batch_size * 10)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset('nsl_train_data.tfr', True)
validation_dataset = make_dataset('validation_data.tfr')
Explanation: Prepare the data
Now, we will prepare our dataset with TensorFlow's data module (tf.data). This will include parsing the TFRecords, structuring them, batching them, and shuffling them if necessary.
In the next two cells, we first define a default value for the image examples which will come in handy when parsing the neighbor examples. Then we write a utility function to create the tf.data.Dataset objects which will be fed to our model in a moment.
End of explanation
sample = next(iter(train_dataset))
sample[0].keys()
plt.figure(figsize=(20, 20))
plt.subplot(1, 3, 1)
plt.imshow(sample[0]["NL_nbr_0_image"][0])
neighbor_one_weight = float(sample[0]["NL_nbr_0_weight"][0].numpy())
plt.title(f"Neighbor 1 with weight: {neighbor_one_weight:.3f}", fontsize=14)
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(sample[0]["NL_nbr_1_image"][0])
neighbor_two_weight = float(sample[0]["NL_nbr_1_weight"][0].numpy())
plt.title(f"Neighbor 2 with weight: {neighbor_two_weight:.3f}", fontsize=14)
plt.axis("off")
plt.subplot(1, 3, 3)
plt.imshow(sample[0]["image"][0])
plt.title(f"Original image with label: {int(sample[1][0])}", fontsize=14)
plt.axis("off")
plt.show()
Explanation: Visualization
In this section, we visualize the neighbor images as computed by Neural Structured Learning during building the graph.
End of explanation
def make_cnn_model():
  """Creates a simple CNN model."""
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(
input_shape=(IMG_SIZE, IMG_SIZE, 3), name='image'),
tf.keras.layers.experimental.preprocessing.Rescaling(scale=1. / 255),
tf.keras.layers.Conv2D(
HPARAMS.num_channels, HPARAMS.kernel_size, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(
HPARAMS.num_channels, HPARAMS.kernel_size, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.GlobalAvgPool2D(),
tf.keras.layers.Dense(HPARAMS.num_classes)
])
return model
model = make_cnn_model()
model.summary()
Explanation: In the figure above, weight denotes the similarity strength of the examples.
Model training (no graph regularization)
With our datasets prepared, we are now ready to train our shallow CNN model without graph regularization.
End of explanation
model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
Explanation: After building and initializing the model in Keras, we can compile it and finally train it.
End of explanation
history_dict = history.history
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-bo" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
Explanation: Plot training metrics
End of explanation
# Build a new base CNN model.
base_reg_model = make_cnn_model()
# Wrap the base model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
graph_reg_history = graph_reg_model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
Explanation: Graph regularization
We are now ready to try graph regularization using the base model that we built
above. We will use the GraphRegularization wrapper class provided by the
Neural Structured Learning framework to wrap the base (CNN) model to include
graph regularization. The rest of the steps for training and evaluating the
graph-regularized model are similar to that of the base model.
Create graph-regularized model
To assess the incremental benefit of graph regularization, we will create a new
base model instance. This is because the base model has already been trained for a few
iterations, and reusing this trained model to create the graph-regularized model
would not be a fair comparison.
End of explanation
graph_reg_history_dict = graph_reg_history.history
acc = graph_reg_history_dict['accuracy']
val_acc = graph_reg_history_dict['val_accuracy']
loss = graph_reg_history_dict['loss']
graph_loss = graph_reg_history_dict['scaled_graph_loss']
val_loss = graph_reg_history_dict['val_loss']
epochs = range(1, len(acc) + 1)
plt.clf() # clear figure
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-gD" is for solid green line with diamond markers.
plt.plot(epochs, graph_loss, '-gD', label='Training graph loss')
# "-bo" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
Explanation: Plot training metrics of the graph-regularized model
End of explanation |
8,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Bayesian Models
Step1: Bayesian Models
Bayesian models are at the heart of many ML applications, and they can be implemented in regression or classification. For example, the "Naive Bayes" algorithm has proven to be an excellent spam detection method. Bayesian inference is often used in applications of modeling stochastic, temporal, or time-series data, such as finance, healthcare, sales, marketing, and economics. Bayesian networks are also at the heart of reinforcement learning (RL) algorithms, which drive complex automation, like autonomous vehicles. And Bayesian optimization is used to maximize the effectiveness of AI game opponents like AlphaGo. Bayesian models make effective use of information, and it is possible to parameterize and update these models using prior and posterior probability functions.
There are many libraries that implement probabilistic programming including TensorFlow Probability.
In this Colab we will implement a Bayesian model using a Naive Bayes classifier to predict the likelihood of spam in a sample of text data.
Load Packages
Step2: Naive Bayes
What is Naive Bayes? There are two aspects
Step3: First let's analyze the number of spam vs. ham. For reference, "ham" is the opposite of "spam", so a non-spam message.
Step4: Here we notice a class imbalance with under 1000 spam messages out of over 5000 total messages.
Now we create a list of keywords that might indicate spam and generate features columns for each keyword.
Step5: Let's look at the correlation of features.
Step6: The heatmap shows only weak correlations between variables like 'cash', 'win', 'free', and 'urgent'. Therefore, we can assume there is independence between each keyword. In actuality, we are violating this assumption.
Train a Model to Predict Spam
Step7: Using features, we will now make predictions on whether an individual message is spam or ham.
Step8: The confusion matrix reads as follows
Step9: For email, what's more important
Step10: Student Solution | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/06_other_models/03_bayes/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
from zipfile import ZipFile
import urllib.request
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
Explanation: Bayesian Models
Bayesian models are at the heart of many ML applications, and they can be implemented in regression or classification. For example, the "Naive Bayes" algorithm has proven to be an excellent spam detection method. Bayesian inference is often used in applications of modeling stochastic, temporal, or time-series data, such as finance, healthcare, sales, marketing, and economics. Bayesian networks are also at the heart of reinforcement learning (RL) algorithms, which drive complex automation, like autonomous vehicles. And Bayesian optimization is used to maximize the effectiveness of AI game opponents like alphaGO. Bayesian models make effective use of information, and it is possible to parameterize and update these models using prior and posterior probability functions.
There are many libraries that implement probabilistic programming including TensorFlow Probability.
In this Colab we will implement a Bayesian model using a Naive Bayes classifier to predict the likelihood of spam in a sample of text data.
Load Packages
End of explanation
def LoadZip(url, file_name, cols=['type', 'message']):
# Download file.
urllib.request.urlretrieve(url, 'spam.zip')
# Open zip in memory.
with ZipFile('spam.zip') as myzip:
with myzip.open(file_name) as myfile:
df = pd.read_csv(myfile, sep='\t', header=None)
df.columns=cols
display(df.head())
display(df.shape)
return df
url = ('https://archive.ics.uci.edu/ml/machine-learning-databases/00228/'
'smsspamcollection.zip')
df = LoadZip(url, 'SMSSpamCollection')
Explanation: Naive Bayes
What is Naive Bayes? There are two aspects: the first is naive, and the second is Bayes'. Let's first review the second part: Bayes' theorem from probability.
$$ P(x)P(y|x) = P(y)P(x|y) $$
Using this theorem, we can solve for the conditional probability of event $y$, given condition $x$. Furthermore, Bayes' rule can be extended to incorporate $n$ vectors as follows:
$$ P(y|x_1, ..., x_n) = \frac{P(y)P(x_1, ..., x_n|y)}{P(x_1, ..., x_n)}$$
These probability vectors can then be simplified by multiplying the individual conditional probability for each vector and taking the maximum likelihood. Naive Bayes returns the y value, or the category that maximizes the following argument.
$$ \hat{y} = argmax_y \left( P(y)\prod_{i=1}^{n} P(x_i|y) \right) $$
Don't worry too much if this is a bit too much algebra. The actual implementations don't require us to remember everything!
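To see the argmax formula in action, here is a tiny hand-rolled Bernoulli Naive Bayes on made-up data — one binary keyword feature, with Laplace smoothing added for illustration (the real notebook uses scikit-learn's BernoulliNB instead):

```python
import numpy as np

# Toy corpus: one binary feature per message ("contains the word 'win'").
X = np.array([[1], [1], [0], [0], [0]])
y = np.array(["spam", "spam", "ham", "ham", "ham"])

def predict(x):
    scores = {}
    for c in ("spam", "ham"):
        prior = np.mean(y == c)  # P(y = c)
        # P(x_i = 1 | c) with Laplace smoothing to avoid zero probabilities.
        p = (X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)
        # Product of per-feature conditional probabilities.
        likelihood = np.prod(np.where(x == 1, p, 1 - p))
        scores[c] = prior * likelihood
    # Return the class maximizing P(y) * prod_i P(x_i | y).
    return max(scores, key=scores.get)

print(predict(np.array([1])))  # spam
print(predict(np.array([0])))  # ham
```

Messages containing "win" score 0.4 × 0.75 = 0.3 for spam versus 0.6 × 0.2 = 0.12 for ham, so spam wins the argmax.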
But Wait, Why "Naive"?
In this context, "naive" assumes that there is independence between pairs of conditional vectors. In other words, it assumes the features of your model are independent (or at least, have a low multicollinearity). This is typically not the case, and it is the cause for error. Naive Bayes is practically good for classification, but not for estimation. Furthermore, it is not robust to interaction, so some of your variables may have interactions. This comes up quite frequently in natural language processing (NLP), and so the usefulness of Naive Bayes is limited to simpler applications. Sometimes simple is better, like in spam filtering where Naive Bayes can perform reasonably well with limited training data.
Spam Filtering
End of explanation
sns.countplot(df['type'])
plt.show()
Explanation: First let's analyze the number of spam vs. ham. For reference, "ham" is the opposite of "spam", so a non-spam message.
End of explanation
features = pd.DataFrame()
keywords = ['selected', 'win','deal', 'free', 'trip', 'urgent', 'require',
'need', 'cash', 'asap']
# Use regex search built into pandas.
for k in keywords:
features[k]=df['message'].str.contains(k, case=False)
Explanation: Here we notice a class imbalance with under 1000 spam messages out of over 5000 total messages.
Now we create a list of keywords that might indicate spam and generate features columns for each keyword.
End of explanation
features['allcaps'] = df['message'].str.isupper()
sns.heatmap(features.corr())
plt.show()
Explanation: Let's look at the correlation of features.
End of explanation
np.random.seed(seed=0)
X = features
y = df['type']
X_train, X_test, y_train, y_test = train_test_split(X,y)
sns.countplot(y_test)
plt.show()
Explanation: The heatmap shows only weak correlations between variables like 'cash', 'win', 'free', and 'urgent'. Therefore, we can treat the keywords as approximately independent, though in actuality we are still violating the independence assumption.
Train a Model to Predict Spam
End of explanation
def classifyNB(X_train,y_train, X_test, y_test, cols=['spam', 'ham']):
nb = BernoulliNB()
nb.fit(X_train,y_train)
y_pred = nb.predict(X_test)
class_names = cols
print('Classification Report')
print(classification_report(y_test, y_pred, target_names=class_names))
cm = confusion_matrix(y_test, y_pred, labels=class_names)
df_cm = pd.DataFrame(cm, index=class_names, columns=class_names)
sns.heatmap(df_cm, cmap='Blues', annot=True, fmt="d",
xticklabels=True, yticklabels=True, cbar=False, square=True)
    plt.ylabel('Actual')
    plt.xlabel('Predicted')
plt.suptitle("Confusion Matrix")
plt.show()
classifyNB(X_train,y_train,X_test,y_test)
Explanation: Using features, we will now make predictions on whether an individual message is spam or ham.
End of explanation
%%html
<a title="Walber [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons"
href="https://commons.wikimedia.org/wiki/File:Precisionrecall.svg">
<img width="256" alt="Precisionrecall"
src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/26/Precisionrecall.svg/256px-Precisionrecall.svg.png">
</a>
Explanation: The confusion matrix reads as follows:
1182 ham messages correctly predicted
114 ham messages were predicted to be spam (false positives, i.e., Type I errors, with spam as the positive class)
71 spam messages were correctly predicted
26 spam messages were erroneously predicted to be ham (false negatives, i.e., Type II errors)
Precision and Recall
Remember that precision and recall are derived from the ground truth. Review the diagram below for clarification.
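As a quick check, the two quantities can be computed directly from the counts read off the confusion matrix above (a sketch; spam is taken here as the positive class):

```python
# Counts taken from the confusion matrix above, treating spam as positive.
tp = 71    # spam correctly flagged as spam
fp = 114   # ham incorrectly flagged as spam
fn = 26    # spam that slipped through as ham

precision = tp / (tp + fp)  # of all messages flagged spam, how many truly were
recall = tp / (tp + fn)     # of all actual spam, how many were caught

print('precision: %.2f, recall: %.2f' % (precision, recall))
```

Note the asymmetry: this classifier catches most spam (recall) but flags a lot of ham along the way (precision), which matters for the ham-protection question below.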
End of explanation
url = ('https://archive.ics.uci.edu/ml/machine-learning-databases/'
'00331/sentiment%20labelled%20sentences.zip')
cols = ['message', 'sentiment']
folder = 'sentiment labelled sentences'
print('\nYelp')
df_yelp = LoadZip(url, folder+'/yelp_labelled.txt', cols)
print('\nAmazon')
df_amazon = LoadZip(url, folder+'/amazon_cells_labelled.txt', cols)
print('\nImdb')
df_imdb = LoadZip(url, folder+'/imdb_labelled.txt', cols)
Explanation: For email, what's more important: spam detection or ham protection?
In the case of your inbox, I don't think anyone wants to have legitimate email end up in the spam folder. On the other hand, your organization may be the target of phishing, and it may be important to filter out all spam aggressively. The answer to the question depends on the situation.
Resources
Naive Bayes docs
Spam dataset
Sentiment reviews
Paper on classifiers
Bayesian inference
Exercises
Exercise 1
Let's load some user reviews data and do a sentiment analysis. Download the text data from this UCI ML archive.
Create a classifier using Naive Bayes for one of the three datasets in the cell below. See how it performs on the other two sets of reviews. Comment on your approach to building features and why that may or may not work well for each dataset.
End of explanation
# Your answer goes here
Explanation: Student Solution
End of explanation |
8,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programutveckling med Git
En introduktion
Most images in this presentation are from the Pro Git book. The entire Pro Git book, written by Scott Chacon and Ben Straub and published by Apress, is available here. All content is licensed under the Creative Commons Attribution Non Commercial Share Alike 3.0 license. Print versions of the book are available on Amazon.com.
Some images are from Git With The Program
Step1: Version Control Systems
To mention a few
Step2: Write some code and check status
Step3: Add code to stage area and check status
Step4: Transfer code to local repository
Step5: Some Git clients
SourceTree https
Step6: Build and check status
Meson is a build-system generator
Ninja is a small build system with a focus on speed.
Step7: Instruct Git to ignore build artifacts and other files that shouldn't be version controlled
Add an .gitignore file
There are plenty of examples of .gitignore files on the net and/or use e.g., https
Step8: Branches, HEAD and tags?
<img src="branch-and-history.png", width=600>
Step9: Creating a New Branch
<img src="two-branches.png", width=600>
HEAD pointing to a branch
<img src="head-to-master.png", width=600>
Switching Branch
<img src="head-to-testing.png", width=600>
The HEAD branch moves forward when a commit is made
<img src="advance-testing.png", width=600>
HEAD moves when you checkout
<img src="checkout-master.png", width=600>
Branching models
Git-flow
http
Step10: CMake is used in spdlog! A commonly used build preparing tool.
Build and install spdlog
Try it out in cmake gui <img src="cmake.png" width=600>
Step11: Notice the installed .cmake files and spdlog.pc
+ .cmake - Facilitates to use this package from other CMake projects
+ spdlog.pc - Facilitates to use this package from other projects using the pkg-config utility
Introduce spdlog in TickCounter project
Use a new branch and name it "logger".
Step12: Add some code using spdlog and fix build system
Step13: Check differences in SourceTree. Build and run.
Step14: Merge logging branch into master
There are several concepts that can be involved in a merge.
- Fast forward
- Three-way merge
- Rebase
- Conflicts
<img src="merge-setup.png" width=300>
Fast forward
<img src="fastforward.png" width=300>
Three-way merge
<img src="threeway-init.png" width=500>
<img src="threeway.png" width=500>
Rebase (change the history)
<img src="rebase-init.png" width=500>
<img src="rebase.png" width=500>
<img src="rebase-fastforward.png" width=700>
<img src="rebase-shalt-not.jpg" width=800>
Merge conflict
Let's create a conflict on the last line in main.cpp
Also perform development on another branch. Let's call it features.
Step15: Add some features on the features branch
Step16: Merge features onto master --> Conflict
Always check out the branch that is to be modified!
Step17: Resolve conlict in SourceTree
<img src="conflict-sourcetree.png" width=600>
Commit merge
Step18: Rebase
This rebase scenario will target these goals
Step19: I usually create a tmp branch at this point (If something goes wrong, just remove after)
Step20: Use git rebase interactively to squash the logging branch
<img src="squash-init.png" width=500>
Use git rebase interactively. Let's perform this in SourceTree via a terminal.
Like so
Step21: After rebase of logging
<img src="rebase-logging.png" width=500>
Now master can be updated with a fast forward merge of logging
Step22: Remove logging-tmp
Step23: Reflog to your help if something has gone really wrong
Step24: Some more Git concepts
Detached HEAD - HEAD points directly to a commit (not a good state to be in)
Hunk - stage and/or discard changes of a file
Amend - fix last commit code and/or message
Stash - put changes in the stash
Blame - check who did what in a file
Cherry pick - apply the changes from a single commit onto the current branch
Step25: <img src="hunk.png" width=800>
GitHub
Create a repository on your GitHub account @ https
Step26: 3. Clone TickProject from GitHub to a second repository
Step27: 4. Add some code locally in a new branch 'travis' and push upstream
Step28: 5. Fetch from upstream to TickProject2
Step29: 6. At GitHub create a pull request to merge travis into master. Acknowledge.
Update TickCounter2! | Python Code:
from IPython.core.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/pOSqctHH9vY" frameborder="0" allowfullscreen></iframe>')
Explanation: Programutveckling med Git
En introduktion
Most images in this presentation are from the Pro Git book. The entire Pro Git book, written by Scott Chacon and Ben Straub and published by Apress, is available here. All content is licensed under the Creative Commons Attribution Non Commercial Share Alike 3.0 license. Print versions of the book are available on Amazon.com.
Some images are from Git With The Program: Introduction to Basic Git Concepts - Drupal Camp LA 2013. All content is licensed under the CC Attribution-ShareAlike License.
Why Git
Share information with others and/or yourself
Work with multiple things at the same time
Backup
Test stuff quick and easy
So many projects are using Git
End of explanation
%%bash
cd /tmp
rm -rf TickProject
mkdir TickProject
cd TickProject
git init
tree -aC
Explanation: Version Control Systems
To mention a few:
* Centralized Version Control Systems
- SCCS
- RCS
- CVS
- PVCS
- SourceSafe
- TFS
- Perforce
* Distributed Version Control Systems
- Git
- Mercurial
Centralized | Distributed
- | -
<img src="centralized.png", width=600> | <img src="distributed.png", width=600>
Git concepts
Stream of snapshots
<img src="snapshots.png", width=600>
List of file-based changes
<img src="deltas.png", width=600>
Data Model
Blobs (actual data)
Trees (directories of blobs and/or other trees)
Commits, each consisting of:
- A tree
- Zero or more parent commits
- Metadata (message, author email, timestamp)
<img src="data-model-2.png", width=600>
SHA1
<img src="sha1.jpg", width=300>
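Every object id in the data model above is the SHA-1 of the object's type, byte length, and content. A small sketch in plain Python (no Git needed) that mirrors what `git hash-object` computes for a blob:

```python
import hashlib

def git_blob_sha1(content):
    # Git prepends a header -- the object type, a space, the byte length,
    # and a NUL byte -- before hashing, so the id covers type and content.
    header = b'blob %d\x00' % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches what `git hash-object --stdin` would print for the same bytes.
print(git_blob_sha1(b'hello\n'))
```

Because the header is included, the Git id differs from a plain SHA-1 of the file contents.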
The Three States
<img src="areas.png", width=600>
Let's get started
Install Git. Start here: https://git-scm.com
First time setup on local machine
git config --global user.name "John Doe"
git config --global user.email johndoe@example.com
git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe'"
Remote git server (optional)
https://github.com/join?source=header-home
https://bitbucket.org/account/signup/
https://gitlab.com/users/sign_in
Initialize a local repository
End of explanation
%%bash
. quickedit v0.1
# git archive --format=tar --prefix=src/ --remote=~/src/maker-presentation $TAG:TickCounter | tar xf -
tree -aCI .git
git status
Explanation: Write some code and check status
End of explanation
%%bash
cd /tmp/TickProject
git add .
git status
Explanation: Add code to stage area and check status
End of explanation
%%bash
cd /tmp/TickProject
git commit -a -m "Initial revision"
git status
Explanation: Transfer code to local repository
End of explanation
!. quickedit v0.2
Explanation: Some Git clients
SourceTree https://www.sourcetreeapp.com
GitKraken https://www.gitkraken.com
TortoiseGit https://tortoisegit.org
Eclipse plugins
XCode builtin
Write some more code (change name of a variable and add a meson.build file)
Use a gui client to see status and commit code
End of explanation
%%bash
cd /tmp/TickProject && rm -rf build
meson src build
%%bash
ninja -C /tmp/TickProject/build
%%bash
cd /tmp/TickProject
git status
Explanation: Build and check status
Meson is a build-system generator
Ninja is a small build system with a focus on speed.
End of explanation
%%bash
cd /tmp/TickProject
git status
Explanation: Instruct Git to ignore build artifacts and other files that shouldn't be version controlled
Add an .gitignore file
There are plenty of examples of .gitignore files on the net and/or use e.g., https://www.gitignore.io
End of explanation
%%bash
cd /tmp/TickProject
git branch -v
Explanation: Branches, HEAD and tags?
<img src="branch-and-history.png", width=600>
End of explanation
%%bash
cd /tmp
rm -rf spdlog
rm -rf 3rd/spdlog
git clone https://github.com/gabime/spdlog.git spdlog
%%bash
cd /tmp/spdlog
ls -la
Explanation: Creating a New Branch
<img src="two-branches.png", width=600>
HEAD pointing to a branch
<img src="head-to-master.png", width=600>
Switching Branch
<img src="head-to-testing.png", width=600>
The HEAD branch moves forward when a commit is made
<img src="advance-testing.png", width=600>
HEAD moves when you checkout
<img src="checkout-master.png", width=600>
Branching models
Git-flow
http://nvie.com/posts/a-successful-git-branching-model/
<img src="Git-branching-model.pdf", width=600>
Cactus
https://barro.github.io/2016/02/a-succesful-git-branching-model-considered-harmful/
<img src="cactus-model-200.png", width=200>
BBC News
http://www.integralist.co.uk/posts/github-workflow.html
Git Pro book
https://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows
Third party package usage strategies
Package Manager
Homebrew
Nuget
apt-get
Git submodules / subtrees
Qt
<img src="submodules.jpg" width=300>
In source
Separate repositories (pkg-config)
Add better logging support to our TickCounter project
Let's use the 3rd party spdlog package as a separate repository and try it out in a logging feature branch
https://github.com/gabime/spdlog.git
Get the spdlog repo
End of explanation
%%bash
cd /tmp/spdlog-build
ls -l
%%bash
cd /tmp/spdlog-build
xcodebuild -configuration Release -target install
Explanation: CMake is used in spdlog! CMake is a commonly used build-system generator.
Build and install spdlog
Try it out in cmake gui <img src="cmake.png" width=600>
End of explanation
%%bash
cd /tmp/TickProject
git checkout -b logging
# git branch logging
# git checkout logging
Explanation: Notice the installed .cmake files and spdlog.pc
+ .cmake - Facilitates to use this package from other CMake projects
+ spdlog.pc - Facilitates to use this package from other projects using the pkg-config utility
Introduce spdlog in TickCounter project
Use a new branch and name it "logger".
End of explanation
%%bash
. quickci Log4 Log5 Log6
git log
Explanation: Add some code using spdlog and fix build system
End of explanation
%%bash
cd /tmp/TickProject
ninja -C build
Explanation: Check differences in SourceTree. Build and run.
End of explanation
%%bash
cd /tmp/TickProject/src
git checkout master
git branch features
%%bash
cd /tmp/TickProject/src
echo This row at the end will prevent TickProject from building >> main.cpp
git diff
git commit -am "This will be a conflict"
Explanation: Merge logging branch into master
There are several concepts that can be involved in a merge.
- Fast forward
- Three-way merge
- Rebase
- Conflicts
<img src="merge-setup.png" width=300>
Fast forward
<img src="fastforward.png" width=300>
Three-way merge
<img src="threeway-init.png" width=500>
<img src="threeway.png" width=500>
Rebase (change the history)
<img src="rebase-init.png" width=500>
<img src="rebase.png" width=500>
<img src="rebase-fastforward.png" width=700>
<img src="rebase-shalt-not.jpg" width=800>
Merge conflict
Let's create a conflict on the last line in main.cpp
Also perform development on another branch. Let's call it features.
End of explanation
%%bash
cd /tmp/TickProject/src
git checkout features
sed -i 's/Hello World/Hello Makers/g' main.cpp
git diff
git commit -am "Changed greeting"
%%bash
cd /tmp/TickProject/src
echo // This row at the end will compile >> main.cpp
git diff
git commit -am "Added a row at the end of main.cpp"
Explanation: Add some features on the features branch
End of explanation
%%bash
cd /tmp/TickProject/src
git checkout master
git merge features
Explanation: Merge features onto master --> Conflict
Always check out the branch that is to be modified!
End of explanation
%%bash
cd /tmp/TickProject/src
git status
%%bash
cd /tmp/TickProject/src
git commit -am "Features added"
Explanation: Resolve conflict in SourceTree
<img src="conflict-sourcetree.png" width=600>
Commit merge
End of explanation
%%bash
cd /tmp/TickProject
git checkout logging
Explanation: Rebase
This rebase scenario will target these goals:
* The logging branch (spdlog) is merged into the master branch
* An appropriate commit message (i.e, not 'Fixed Log6')
* Only one commit from logging (squash several commits to one)
End of explanation
%%bash
cd /tmp/TickProject
git branch logging-tmp
Explanation: I usually create a tmp branch at this point (If something goes wrong, just remove after)
End of explanation
%%bash
cd /tmp/TickProject
git checkout logging
git rebase master
Explanation: Use git rebase interactively to squash the logging branch
<img src="squash-init.png" width=500>
Use git rebase interactively. Let's perform this in SourceTree via a terminal.
Like so: git rebase -i 06558f1d77a09bb41a97bf3eda20e1af3f551a39
<img src="squash.png" width=500>
After squash
<img src="squash-after.png" width=500>
End of explanation
%%bash
cd /tmp/TickProject
git checkout master
git merge logging
Explanation: After rebase of logging
<img src="rebase-logging.png" width=500>
Now master can be updated with a fast forward merge of logging
End of explanation
%%bash
cd /tmp/TickProject
git branch -d logging-tmp
%%bash
cd /tmp/TickProject
git branch -D logging-tmp
Explanation: Remove logging-tmp
End of explanation
%%bash
cd /tmp/TickProject
git reflog
Explanation: Reflog to your help if something has gone really wrong
End of explanation
!. quickedit v0.2
Explanation: Some more Git concepts
Detached HEAD - HEAD points directly to a commit (not a good state to be in)
Hunk - stage and/or discard changes of a file
Amend - fix last commit code and/or message
Stash - put changes in the stash
Blame - check who did what in a file
Cherry pick - apply the changes from a single commit onto the current branch
End of explanation
%%bash
cd /tmp/TickProject
git checkout master
git remote add origin https://github.com/topcatse/TickProject.git
git push --set-upstream origin master
Explanation: <img src="hunk.png" width=800>
GitHub
Create a repository on your GitHub account @ https://github.com
Bind your local account to the GitHub upstream repository
Clone upstream repository to a second repository
Add some code on a new branch travis and push upstream
On secondary repository, fetch from upstream and merge or do a pull
Create a pull request
2. Bind
End of explanation
%%bash
cd /tmp
git clone https://github.com/topcatse/TickProject.git TickProject2
cd TickProject2
tree -aCI .git
%%bash
cd /tmp
ls -l
Explanation: 3. Clone TickProject from GitHub to a second repository
End of explanation
%%bash
cd /tmp/TickProject
git checkout -b travis
git archive --format=tar --remote=~/src/maker-presentation v0.2 .travis.yml | tar xf -
git add .travis.yml
git commit -am "Added travis-ci build"
git push -u origin travis
Explanation: 4. Add some code locally in a new branch 'travis' and push upstream
End of explanation
%%bash
cd /tmp/TickProject2
git fetch origin
git branch -a
%%bash
cd /tmp/TickProject2
# git checkout -b travis --track origin/travis
git checkout travis
Explanation: 5. Fetch from upstream to TickProject2
End of explanation
%%bash
cd /tmp/TickProject2
git fetch origin
git branch -a
%%bash
cd /tmp/TickProject2
git checkout master
git log master..origin/master
%%bash
cd /tmp/TickProject2
git merge origin/master
git status
Explanation: 6. At GitHub create a pull request to merge travis into master. Acknowledge.
Update TickCounter2!
End of explanation |
8,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing the NYC Subway Dataset
Intro to Data Science
Step1: Class for Creating Training and Testing Samples
Step2: Section 2. Linear Regression
<h3 id='2_1'>2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model?</h3>
Notes
<sup>1</sup> The linear correlation coefficient ($r$) can take on the following values
Step3: Only categorical
Step4: <h3 id='2_4'>2.4 What are the coefficients (or weights) of the features in your linear regression model?</h3>
Step5: <h3 id='2_5'>2.5 What is your model’s $R^{2}$ (coefficient of determination) value?</h3>
For $n = 500$, the best $R^{2}$ value witnessed was $0.85$ (with the best $r$ value seen at $0.92$).
<h3 id='2_6_a'>2.6.a What does this $R^{2}$ value mean for the goodness of fit for your regression model?</h3>
This $R^{2}$ value means that $85\%$ of the total variation in the response variable is explained by the least-squares regression line (i.e., the model) created above.
<h3 id='2_6_b'>2.6.b Do you think this linear model to predict ridership is appropriate for this dataset, given this $R^{2}$ value?</h3>
It's better than guessing in the dark, but too much shouldn't be staked on its predictions
Step6: As can be seen from the above, somewhat arbitrarily-selected, values, the number of close predictions is a little over $50\%$ when close is defined as a prediction with a difference that is less than $1$ from the actual observed value. Given that the value of entries can take on such a large range of values $[0, 32814]$, differences less than $100$ and $1000$ are shown as well.
Residual Analysis
Step7: Since the above predictions show a discernible, linear, and increasing pattern (and, thus, are not stochastic), it seems apparent that there is in fact not a linear relationship between the explanatory and response variables. Thus, a linear model is not appropriate for the current data set.
Step8: Gradient Descent
Step9: Regression makes little sense here:
- negative values are complete garbage
- fractional values mean nothing
- moreover, humans tend to care about ranges when it comes to numbers like these, not exact values
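One hedged way to act on that observation is to clip impossible negatives and bucket predictions into coarse ranges instead of exact counts. A sketch (the prediction values and bin edges here are invented for illustration):

```python
import numpy as np

# Hypothetical hourly-entry predictions; bin edges chosen arbitrarily.
preds = np.array([-12.3, 42.7, 850.0, 7200.0, 31000.0])
bins = np.array([0, 100, 1000, 5000, 10000])

# Clip impossible negative entry counts to zero, then map to range labels.
clipped = np.clip(preds, 0, None)
labels = np.digitize(clipped, bins)  # 1 -> [0, 100), 2 -> [100, 1000), ...
print(labels)
```

Framing the problem this way turns the regression into a coarse classification over ranges, which sidesteps the negative and fractional outputs.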
import numpy as np
import pandas as pd
import scipy as sp
import scipy.stats as st
import statsmodels.api as sm
import scipy.optimize as op
import matplotlib.pyplot as plt
%matplotlib inline
filename = '/Users/excalibur/py/nanodegree/intro_ds/final_project/improved-dataset/turnstile_weather_v2.csv'
# import data
data = pd.read_csv(filename)
print data.columns.values
data['ENTRIESn_hourly'].describe()
plt.boxplot(data['ENTRIESn_hourly'], vert=False)
plt.show()
data[data['ENTRIESn_hourly'] == 0].count()[0]
data[data['ENTRIESn_hourly'] > 500].count()[0]
data[data['ENTRIESn_hourly'] > 1000].count()[0]
data[data['ENTRIESn_hourly'] > 5000].count()[0]
data[data['ENTRIESn_hourly'] > 10000].count()[0]
plt.figure(figsize = (10,10))
plt.hist(data['ENTRIESn_hourly'], bins=100)
plt.show()
plt.boxplot(data['ENTRIESn_hourly'], vert=False)
plt.show()
# the overwhelming majority of the action is occurring below 10000
#data = data[(data['ENTRIESn_hourly'] <= 10000)]
plt.figure(figsize = (10,10))
plt.hist(data['ENTRIESn_hourly'].values, bins=100)
plt.show()
plt.boxplot(data['ENTRIESn_hourly'].values, vert=False)
plt.show()
Explanation: Analyzing the NYC Subway Dataset
Intro to Data Science: Final Project 1, Part 2
(Short Questions)
Section 2. Linear Regression
Austin J. Alexander
Import Directives and Initial DataFrame Creation
End of explanation
class SampleCreator:
def __init__(self,data,categorical_features,quantitative_features):
m = data.shape[0]
random_indices = np.random.choice(np.arange(0,m), size=m, replace=False)
train_indices = random_indices[0:(m-(m*0.10))] # leave about 10% of data for testing
test_indices = random_indices[(m-(m*0.10)):]
# check disjointedness of training and testing indices
for i in train_indices:
if i in test_indices:
print "<!> Training and Testing Sample Overlap <!>"
# response vector
y = data['ENTRIESn_hourly'].values
# get quantitative features
X = data[quantitative_features].values
# Feature Scaling
# mean normalization
x_i_bar = []
s_i = []
for i in np.arange(X.shape[1]):
x_i_bar.append(np.mean(X[:,i]))
s_i.append(np.std(X[:,i]))
X[:,i] = np.true_divide((np.subtract(X[:,i],x_i_bar[i])),s_i[i])
# create dummy variables for categorical features
for feature in categorical_features:
dummies = sm.categorical(data[feature].values, drop=True)
X = np.hstack((X,dummies))
# final design matrix
X = sm.add_constant(X)
# training samples
self.y_train = y[train_indices]
self.X_train = X[train_indices]
# testing samples
self.y_test = y[test_indices]
self.X_test = X[test_indices]
Explanation: Class for Creating Training and Testing Samples
End of explanation
#categorical_features = ['UNIT', 'hour', 'day_week', 'station']
categorical_features = ['UNIT']
#quantitative_features = ['latitude', 'longitude', 'rain']
quantitative_features = []
# for tracking during trials
best_rsquared = 0
best_results = []
# perform 5 trials; keep model with best R^2
for x in xrange(0,5):
samples = SampleCreator(data,categorical_features,quantitative_features)
model = sm.OLS(samples.y_train,samples.X_train)
results = model.fit()
if results.rsquared > best_rsquared:
best_rsquared = results.rsquared
best_results = results
print "r = {0:.2f}".format(np.sqrt(best_results.rsquared))
print "R^2 = {0:.2f}".format(best_results.rsquared)
Explanation: Section 2. Linear Regression
<h3 id='2_1'>2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model?</h3>
Notes
<sup>1</sup> The linear correlation coefficient ($r$) can take on the following values: $-1 \leq r \leq 1$. If $r = +1$, then a perfect positive linear relation exists between the explanatory and response variables. If $r = -1$, then a perfect negative linear relation exists between the explanatory and response variables.
<sup>2</sup> The coefficient of determination ($R^{2}$) can take on the following values: $0 \leq R^{2} \leq 1$. If $R^{2} = 0$, the least-squares regression line has no explanatory value; if $R^{2} = 1$, the least-squares regression line explains $100\%$ of the variation in the response variable.
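A small self-contained illustration of both quantities (toy data, invented for this note), including the fact that for simple one-predictor linear regression $R^{2}$ equals $r^{2}$:

```python
import numpy as np

# Toy data (invented): a roughly linear relationship.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least-squares line y_hat = b0 + b1*x (polyfit returns highest degree first).
b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

r = np.corrcoef(x, y)[0, 1]                 # linear correlation coefficient
ss_res = np.sum((y - y_hat) ** 2)           # residual sum of squares
ss_tot = np.sum((y - np.mean(y)) ** 2)      # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(round(r, 3), round(r_squared, 3))
```

With multiple predictors (as in the model below), $R^{2}$ is still $1 - SS_{res}/SS_{tot}$, but the simple $r^{2}$ identity no longer holds.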
<h3 id='2_2'>2.2 What features (input variables) did you use in your model? Did you use any dummy variables as part of your features?</h3>
Quantitative features used: 'hour','day_week','rain','tempi'.
Categorical features used: 'UNIT'. As a categorical feature, this variable required the use of so-called dummy variables.
<h3 id='2_3'>2.3 Why did you select these features in your model?</h3>
Due to the findings presented in the <a href='IntroDS-ProjectOne-DataExploration-Supplement.ipynb' target='_blank'>DataExploration supplement</a>, it seemed clear that location significantly impacted the number of entries. In addition, the hour and day of the week showed importance. Temperature appeared to have some relationship with entries as well, and so it was included. Based on that exploration and on the statistical and practical evidence offered in <a href='IntroDS-ProjectOne-Section1.ipynb' target='_blank'>Section 1. Statistical Test</a>, rain was not included as a feature (and, as evidenced by a number of test runs, had marginal if any importance).
As far as the selection of location and day/time variables were concerned, station can be captured quantitatively by latitude and longitude, both of which, as numeric values, should offer a better sense of trend toward something. However, as witnessed by numerous test runs, latitude and longitude in fact appear to be redundant when using UNIT as a feature, which is in fact more signficant (as test runs indicated and, as one might assume, due to, for example, station layouts) than latitude and longitude.
Each DATEn is a 'one-off', so it's unclear how any could be helpful for modeling/predicting (as those dates literally never occur again). day_week seemed to be a better selection in this case.
Using StatsModels OLS to Create a Model
End of explanation
X_train = samples.X_train
print X_train.shape
y_train = samples.y_train
print y_train.shape
y_train.shape = (y_train.shape[0],1)
print y_train.shape
X_test = samples.X_test
print X_test.shape
y_test = samples.y_test
print y_test.shape
y_test.shape = (y_test.shape[0],1)
print y_test.shape
ols_y_hat = best_results.predict(X_test)
ols_y_hat.shape = (ols_y_hat.shape[0],1)
plt.title('Observed Values vs Fitted Predictions')
plt.xlabel('observed values')
plt.ylabel('predictions')
plt.scatter(y_test, ols_y_hat, alpha=0.7, color='green', edgecolors='black')
plt.show()
Explanation: Notes from feature-selection trials:
- Only categorical features: day_week r = 0.24, hour r = 0.41, station r = 0.58, UNIT r = 0.61 (adding conds does nothing: r = 0.74)
- without rain: r = 0.74, with rain: r = 0.74
- only quantitative: rain r = 0.74, which meant I didn't have to bother checking lat and long
- no quantitative: r = 0.74
- UNIT makes intuitive sense
- as expected, including 'fog', 'precipi', 'pressurei', 'tempi', 'wspdi', 'meanprecipi', 'meanpressurei', 'meantempi', 'meanwspdi', 'weather_lat', 'weather_lon' did nothing (r = 0.74)
Get Training and Testing Values
End of explanation
print best_results.params
Explanation: <h3 id='2_4'>2.4 What are the coefficients (or weights) of the features in your linear regression model?</h3>
End of explanation
ols_residuals = (ols_y_hat - y_test)
ols_residuals.shape
Explanation: <h3 id='2_5'>2.5 What is your model’s $R^{2}$ (coefficient of determination) value?</h3>
For $n = 500$, the best $R^{2}$ value witnessed was $0.85$ (with the best $r$ value seen at $0.92$).
<h3 id='2_6_a'>2.6.a What does this $R^{2}$ value mean for the goodness of fit for your regression model?</h3>
This $R^{2}$ value means that $85\%$ of the total variation in the response variable is explained by the least-squares regression line (i.e., the model) created above.
<h3 id='2_6_b'>2.6.b Do you think this linear model to predict ridership is appropriate for this dataset, given this $R^{2}$ value?</h3>
It's better than guessing in the dark, but too much shouldn't be staked on its predictions:
Predictions and their Residual Differences from Observed Values
End of explanation
plt.boxplot(ols_residuals, vert=False)
plt.title('Boxplot of Residuals')
plt.xlabel('residuals')
plt.show()
plt.scatter(ols_y_hat,ols_residuals, alpha=0.7, color='purple', edgecolors='black')
plt.title('RESIDUAL PLOT')
plt.plot([np.min(ols_y_hat),np.max(ols_y_hat)], [0, 0], color='red')
plt.xlabel('predictions')
plt.ylabel('residuals')
plt.show()
plt.hist(y_test, color='purple', alpha=0.7, label='observations')
plt.hist(ols_y_hat, color='green', alpha=0.5, bins=6, label='ols predictions')
plt.title('OBSERVATIONS vs OLS PREDICTIONS')
plt.ylabel('frequency')
plt.legend()
plt.show()
plt.hist(ols_residuals, color='gray', alpha=0.7)
plt.title('OLS RESIDUALS')
plt.ylabel('frequency')
plt.show()
Explanation: As can be seen from the above, somewhat arbitrarily-selected, values, the number of close predictions is a little over $50\%$ when close is defined as a prediction with a difference that is less than $1$ from the actual observed value. Given that the value of entries can take on such a large range of values $[0, 32814]$, differences less than $100$ and $1000$ are shown as well.
Residual Analysis
End of explanation
best_results.summary()
Explanation: Since the above predictions show a discernible, linear, and increasing pattern (and, thus, are not stochastic), it seems apparent that there is in fact not a linear relationship between the explanatory and response variables. Thus, a linear model is not appropriate for the current data set.
End of explanation
#gradient descent, number of iterations
#iterations = 100
iterations = 300
# learning rates
#alpha = [-0.3, -0.1, -0.03, -0.01, -0.003, -0.001, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3]
#alpha = [0.001] # before removing large values
alpha = [0.01]
# number of examples
m = X_train.shape[0]
print "m = {0}".format(m)
# number of features
n = X_train.shape[1]
print "n = {0}".format(n)
# theta parameters
theta = np.zeros(((n,1)))
# vectorized cost function
def J(X,y):
m = X.shape[0]
return (1.0/(2*m)) * (((X.dot(theta)) - y).T).dot(X.dot(theta) - y)
# vectorized delta function
def delta(X,y):
return X.T.dot((X.dot(theta)) - y)
# gradient descent, test multiple alphas
for a in np.arange(0,len(alpha)):
# reset theta
theta = np.zeros(((n),1))
# reset vector J_values, store cost function values for plotting
J_values = np.zeros((iterations,1))
# minibatch process
for i in np.arange(0,m,100):
for iteration in xrange(0,iterations):
X = X_train[i:i+100]
y = y_train[i:i+100]
theta = theta - (alpha[a] * delta(X,y))
J_values[iteration] = J(X,y)
# visualize the cost function (2-D)
cost_x = np.arange(iterations)
cost_x.shape = (iterations,1)
plt.plot(cost_x,J_values)
plt.title("Learning Rate: " + str(alpha[a]))
plt.xlabel('iterations')
plt.ylabel(r"$J(\theta)$")
plt.show()
print "Parameters:\n{0}\n...".format(theta[0:5])
grad_desc_y_hat = X_test.dot(theta)
print grad_desc_y_hat.shape
plt.title('Observed Values vs Fitted Predictions')
plt.xlabel('observed values')
plt.ylabel('predictions')
plt.scatter(y_test, grad_desc_y_hat, alpha=0.7, color='green', edgecolors='black')
plt.show()
gd_residuals = (grad_desc_y_hat - y_test)
gd_residuals.shape
plt.boxplot(gd_residuals, vert=False)
plt.title('Boxplot of Residuals')
plt.xlabel('residuals')
plt.show()
plt.scatter(grad_desc_y_hat,gd_residuals, alpha=0.7, color='purple', edgecolors='black')
plt.title('RESIDUAL PLOT')
plt.plot([np.min(grad_desc_y_hat),np.max(grad_desc_y_hat)], [0, 0], color='red')
plt.xlabel('predictions')
plt.ylabel('residuals')
plt.show()
gd_rounded_yhat = np.round(grad_desc_y_hat)
for i in np.arange(y_test.shape[0]):
if gd_rounded_yhat[i] == y_test[i]:
print gd_rounded_yhat[i]
plt.hist(y_test, color='purple', alpha=0.7, label='observations')
plt.hist(grad_desc_y_hat, color='green', alpha=0.5, label='gd predictions')
plt.title('OBSERVATIONS vs GD PREDICTIONS')
plt.ylabel('frequency')
plt.legend()
plt.show()
plt.hist(gd_residuals, color='gray', alpha=0.7)
plt.title('GD RESIDUALS')
plt.ylabel('frequency')
plt.show()
def grad_desc_score(tol=1000):
    # fraction of predictions within `tol` of the observed value
    within_tol = 0
    for i in range(len(y_test)):
        if np.absolute(y_test[i] - grad_desc_y_hat[i]) <= tol:
            within_tol += 1
    return within_tol * 1.0 / len(y_test)
grad_desc_score()
X_test[0:5]
Explanation: Gradient Descent
End of explanation
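A useful sanity check on a gradient-descent fit is to compare it against the closed-form least-squares solution. The sketch below does this on synthetic data (the data, learning rate, and iteration count are illustrative assumptions, not the notebook's values; the cost includes the conventional 1/m factor, which the `delta` above folds into the learning rate):

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 200, 3
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n - 1))])  # intercept column
theta_true = np.array([[2.0], [-1.0], [0.5]])
y = X @ theta_true + 0.01 * rng.normal(size=(m, 1))

# Closed form: theta = argmin ||X theta - y||^2
theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Batch gradient descent on J(theta) = (1/2m)||X theta - y||^2
theta_gd = np.zeros((n, 1))
alpha = 0.1
for _ in range(2000):
    theta_gd -= (alpha / m) * (X.T @ (X @ theta_gd - y))

print(np.max(np.abs(theta_ls - theta_gd)))  # difference should be tiny
```

If the two disagree noticeably, the learning rate, iteration count, or update rule is usually at fault.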
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators = 100, n_jobs=-1) # 10, 30, 100, 300
clf = clf.fit(X_train,np.ravel(y_train))
units = data['UNIT'].values
entries = data['ENTRIESn_hourly'].values
units = np.array([int(u.replace('R','')) for u in units])
units.shape = (units.shape[0],1)
clf = RandomForestClassifier(n_estimators = 10, n_jobs=-1) # 10, 30, 100, 300
clf = clf.fit(units,entries)
pred = clf.predict(units)
clf.score(units,entries)
Explanation: regression makes little sense here
negative values are complete garbage
fractional values mean nothing
moreover, humans tend to care about ranges when it comes to numbers like these, not exact values
End of explanation |
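The point about ranges can be made concrete: instead of regressing on exact counts, map them to coarse range labels and treat the task as classification. A minimal sketch — the bin edges below are illustrative assumptions, not values from the notebook:

```python
import numpy as np

# Illustrative bin edges for hourly-entry counts; the cut points are assumptions.
edges = np.array([1, 100, 1000, 10000])
labels = ["0", "1-99", "100-999", "1000-9999", "10000+"]

def to_range_labels(values):
    """Map raw counts to coarse range labels via np.digitize."""
    idx = np.digitize(values, edges)  # index 0..len(edges)
    return [labels[i] for i in idx]

print(to_range_labels([0, 5, 250, 32814]))  # ['0', '1-99', '100-999', '10000+']
```

These labels could then be fed to a classifier such as the random forest above in place of the raw counts.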
8,479 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
UTC
Coordinated Universal Time / Temps Universel Coordonné
Also called Greenwich Mean Time (GMT)
Time zones vs. Offsets
UTC-6 is an offset
US/Central is a time zone
CST is a highly-context-dependent abbreviation
Step1: Also Samoa on December 30, 2011.
Double days
Kwajalein Atoll, 1969
Step2: Why do we need to work with time zones at all?
Step3: Python's Time Zone Model
tzinfo
Time zones are provided by subclassing tzinfo.
Information provided is a function of the datetime
Step4: Ambiguous times
Ambiguous times are times where the same "wall time" occurs twice, such as during a DST to STD transition.
Step5: PEP-495
Step6: Note
Step7: Same Zone
Step8: Comparing timezone-aware datetimes
Different zones
Step9: If either datetime is ambiguous, the result is always False
Step10: A curious case...
Step11: Imaginary Times
Imaginary times are wall times that don't exist in a given time zone, such as during an STD to DST transition.
Step12: Why it was non-transitive
Step13: Working with time zones
dateutil
In dateutil's suite of tzinfo objects, you can attach time zones in the constructor if you have a wall time
Step14: If you have a naive wall time, or a wall time in another zone that you want to translate without shifting the offset, use datetime.replace
Step15: If you have an absolute time, in UTC or otherwise, use datetime.astimezone()
Step16: pytz
In pytz, datetime.astimezone() still works exactly as expected
Step17: But the constructor or .replace methods fail horribly
Step18: pytz's time zone model
tzinfos are all static offsets
tzinfo is attached by the time zone object itself
Step19: You must normalize() datetimes after you've done some arithmetic on them
Step20: Handling ambiguous times
Overview
Both dateutil and pytz will automatically give you the right absolute time if converting from an absolute time.
Step21: dateutil
For backwards compatibility, dateutil provides a tz.enfold method to add a fold attribute if necessary
Step22: ```python
Python 2.7.12
Type "help", "copyright", "credits" or "license" for more information.
from datetime import datetime
from dateutil import tz
dt = datetime(2004, 10, 31, 1, 30, tzinfo=tz.gettz('US/Eastern'))
tz.enfold(dt)
_DatetimeWithFold(2004, 10, 31, 1, 30, tzinfo=tzfile('/usr/share/zoneinfo/US/Eastern'))
tz.enfold(dt).tzname()
'EST'
dt.tzname()
'EDT'
```
dateutil
To detect ambiguous times, dateutil provides tz.datetime_ambiguous
Step23: Note
Step24: pytz
When localizing times, pytz defaults to standard time
Step25: To get a time zone in daylight time, pass is_dst=True to localize
Step26: If is_dst=None is passed to localize, pytz raises an AmbiguousTimeError
Step27: Handling imaginary times
dateutil
dateutil provides a tz.datetime_exists() function to tell you whether you've constructed an imaginary datetime
Step28: Generally for imaginary datetimes, you either want to skip over them or "slide forward"
Step29: pytz
When using localize on an imaginary datetime, pytz will create an imaginary time and use is_dst to decide what offset to assign it
Step30: If you have a non-existent date, normalize() will slide it forward or backwards, depending on the value passed to is_dst (default is False)
Step31: pytz
If you pass is_dst=None, pytz will throw a NonExistentTimeError
Step32: pytz
dateutil.tz.datetime_exists() works with pytz zones, too
Step33: And will detect non-normalized datetimes
Step34: dateutil's tzinfo implementations
UTC and Static time zones
Step35: Static offsets represent zones with a fixed offset from UTC, and takes a tzname or either number of seconds or a timedelta
Step36: UTC and Static time zones
In Python 3.2, timezone objects were introduced to provide ready-made tzinfo subclasses for the simple case of static offsets from UTC.
Step37: Local time
The tz.tzlocal() class is a tzinfo implementation that uses the OS hooks in Python's time module to get the local system time.
Step38: Local time
Step39: The IANA database contains historical time zone transitions
Step40: tz.gettz()
The most general way to get a time zone is to pass the relevant timezone string to the gettz() function, which will try parsing it a number of different ways until it finds a relevant string. | Python Code:
dt_before = datetime(1995, 1, 1, 23, 59, tzinfo=tz.gettz('Pacific/Kiritimati'))
dt_after = add_absolute(dt_before, timedelta(minutes=2))
print(dt_before)
print(dt_after)
Explanation: Introduction
UTC
Coordinated Universal Time / Temps Universel Coordonné
Also called Greenwich Mean Time (GMT)
Time zones vs. Offsets
UTC-6 is an offset
US/Central is a time zone
CST is a highly-context-dependent abbreviation:
Central Standard Time (UTC-6)
Cuba Standard Time (UTC-5)
China Standard Time (UTC+8)
Complicated time zones
Non-integer offsets
Examples:
- Australia/Adelaide (+09:30)
- Asia/Kathmandu (+05:45)
- Africa/Monrovia (+00:44:30) (Before 1979)
Change of DST status without offset change
Portugal, 1992
WET (+0 STD) -> WEST (+1 DST) 1992-03-29
WEST (+1 DST) -> CET (+1 STD) 1992-09-27
Portugal, 1996
CET (+1 STD) -> WEST (+1 DST) 1996-03-31
WEST (+1 DST) -> WET (+0 STD) 1996-10-27
Complicated time zones
Zone name change without offset change
Aleutian Islands, 1983:
BST (-11 STD) -> BDT (-10 DST), 1983-04-24
BDT (-10 DST) -> AHST (-10 STD), 1983-10-30
AHST (-10 STD) -> HST (-10 STD), 1983-11-30 (Zone renamed)
More than one DST transition per year
Morocco, 2012
WET (+0 STD) -> WEST (+1 DST) 2012-04-29
WEST (+1 DST) -> WET (+0 STD) 2012-07-20
WET (+0 STD) -> WEST (+1 DST) 2012-08-20
WEST (+1 DST) -> WET (+0 STD) 2012-09-30
... and Morocco in 2013-present, and Egypt in 2010 and 2014, and Palestine in 2011.
Complicated time zones
Missing days
Christmas Island (Kiritimati), January 2, 1995 (UTC-10 -> UTC+14)
End of explanation
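The offset-vs-zone distinction above can be demonstrated with only the standard library: a fixed offset is the same on every date, while a real zone's offset is a function of the date. The `ToyCentral` class below hard-codes a crude DST rule purely for illustration — real zones should come from the IANA database, not from code like this:

```python
from datetime import datetime, timedelta, timezone, tzinfo

FIXED = timezone(timedelta(hours=-6))  # "UTC-6" is just an offset

class ToyCentral(tzinfo):
    """Toy US/Central-like zone: UTC-5 April-October, UTC-6 otherwise.

    The transition rule is a deliberate oversimplification for illustration.
    """
    def utcoffset(self, dt):
        return timedelta(hours=-5) if 3 < dt.month <= 10 else timedelta(hours=-6)
    def dst(self, dt):
        return timedelta(hours=1) if 3 < dt.month <= 10 else timedelta(0)
    def tzname(self, dt):
        return "CDT" if 3 < dt.month <= 10 else "CST"

jan = datetime(2017, 1, 15, 12, 0)
jul = datetime(2017, 7, 15, 12, 0)
for dt in (jan, jul):
    print(dt.replace(tzinfo=FIXED).utcoffset(),
          dt.replace(tzinfo=ToyCentral()).utcoffset())
```

The fixed offset prints `-1 day, 18:00:00` on both dates; the toy zone's offset changes with the month.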
dt_before = datetime(1969, 9, 30, 11, 59, tzinfo=tz.gettz('Pacific/Kwajalein'))
dt_after = add_absolute(dt_before, timedelta(minutes=2))
print(dt_before)
print(dt_after)
Explanation: Also Samoa on December 30, 2011.
Double days
Kwajalein Atoll, 1969
End of explanation
from dateutil import rrule as rr
# Close of business in New York on weekdays
closing_times = rr.rrule(freq=rr.DAILY, byweekday=(rr.MO, rr.TU, rr.WE, rr.TH, rr.FR),
byhour=17, dtstart=datetime(2017, 3, 9, 17), count=5)
for dt in closing_times:
print(dt.replace(tzinfo=NYC))
for dt in closing_times:
print(dt.replace(tzinfo=NYC).astimezone(UTC))
Explanation: Why do we need to work with time zones at all?
End of explanation
class ET(tzinfo):
def utcoffset(self, dt):
if self.isdaylight(dt):
return timedelta(hours=-4)
else:
return timedelta(hours=-5)
def dst(self, dt):
if self.isdaylight(dt):
return timedelta(hours=1)
else:
return timedelta(hours=0)
def tzname(self, dt):
return "EDT" if self.isdaylight(dt) else "EST"
def isdaylight(self, dt):
dst_start = datetime(dt.year, 1, 1) + rd.relativedelta(month=3, weekday=rd.SU(+2), hour=2)
dst_end = datetime(dt.year, 1, 1) + rd.relativedelta(month=11, weekday=rd.SU, hour=2)
return dst_start <= dt.replace(tzinfo=None) < dst_end
print(datetime(2017, 11, 4, 12, 0, tzinfo=ET()))
print(datetime(2017, 11, 5, 12, 0, tzinfo=ET()))
dt_before_utc = datetime(2017, 11, 5, 0, 30, tzinfo=ET()).astimezone(tz.tzutc())
dt_during = (dt_before_utc + timedelta(hours=1)).astimezone(ET()) # 1:30 EST
dt_after = (dt_before_utc + timedelta(hours=2)).astimezone(ET()) # 1:30 EDT
print(dt_during) # Lookin good!
print(dt_after) # OH NO!
Explanation: Python's Time Zone Model
tzinfo
Time zones are provided by subclassing tzinfo.
Information provided is a function of the datetime:
tzname: The (usually abbreviated) name of the time zone at the given datetime
utcoffset: The offset from UTC at the given datetime
dst: The size of the datetime's DST offset (usually 0 or 1 hour)
An example tzinfo implementation
End of explanation
dt1 = datetime(2004, 10, 31, 4, 30, tzinfo=UTC)
for i in range(4):
dt = (dt1 + timedelta(hours=i)).astimezone(NYC)
print('{} | {} | {}'.format(dt, dt.tzname(),
'Ambiguous' if tz.datetime_ambiguous(dt) else 'Unambiguous'))
Explanation: Ambiguous times
Ambiguous times are times where the same "wall time" occurs twice, such as during a DST to STD transition.
End of explanation
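Under PEP 495, ambiguity can be detected using nothing but the fold attribute: a wall time is ambiguous exactly when its two folds map to different UTC offsets — which is essentially what `tz.datetime_ambiguous` does. The toy zone below hard-codes the 2004-10-31 fall-back for illustration only:

```python
from datetime import datetime, timedelta, tzinfo

class ToyEastern(tzinfo):
    """Toy zone: falls back from UTC-4 to UTC-5 at 2004-10-31 02:00 wall time.

    Hard-coded single transition, purely for illustration.
    """
    FALLBACK = datetime(2004, 10, 31, 2, 0)
    def utcoffset(self, dt):
        naive = dt.replace(tzinfo=None)
        if naive < self.FALLBACK - timedelta(hours=1):
            return timedelta(hours=-4)
        if naive < self.FALLBACK and dt.fold == 0:
            return timedelta(hours=-4)  # first pass through 01:00-01:59
        return timedelta(hours=-5)
    def dst(self, dt):
        return timedelta(hours=1) if self.utcoffset(dt) == timedelta(hours=-4) else timedelta(0)
    def tzname(self, dt):
        return "EDT" if self.utcoffset(dt) == timedelta(hours=-4) else "EST"

def is_ambiguous(dt):
    # Ambiguous iff the two folds disagree on the offset
    return dt.replace(fold=0).utcoffset() != dt.replace(fold=1).utcoffset()

print(is_ambiguous(datetime(2004, 10, 31, 1, 30, tzinfo=ToyEastern())))  # True
print(is_ambiguous(datetime(2004, 10, 31, 3, 30, tzinfo=ToyEastern())))  # False
```

The same fold-comparison trick works with any fold-aware tzinfo implementation.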
print_tzinfo(datetime(2004, 10, 31, 1, 30, tzinfo=NYC)) # fold=0
print_tzinfo(datetime(2004, 10, 31, 1, 30, fold=1, tzinfo=NYC))
Explanation: PEP-495: Local Time Disambiguation
First introduced in Python 3.6
Introduces the fold attribute of datetime
Changes to aware datetime comparison around ambiguous times
Whether you are on the fold side is a property of the datetime:
End of explanation
dt1 = datetime(2004, 10, 30, 12, 0); dt1a = datetime(2004, 10, 31, 1, 30)
dt2 = datetime(2004, 10, 30, 12, 0); dt2a = datetime(2004, 10, 31, 1, 30)
dt3 = datetime(2004, 10, 30, 11, 0); dt3a = datetime(2004, 10, 31, 2, 30) # Unambiguous
Explanation: Note: fold=1 represents the second instance of an ambiguous datetime
Comparing timezone-aware datetimes
End of explanation
print_dt_eq(dt1.replace(tzinfo=NYC), dt2.replace(tzinfo=NYC)) # Unambiguous
print_dt_eq(dt1.replace(tzinfo=NYC), dt3.replace(tzinfo=NYC))
print_dt_eq(dt1a.replace(tzinfo=NYC), dt2a.replace(tzinfo=NYC)) # Ambiguous
print_dt_eq(dt1a.replace(tzinfo=NYC), dt2a.replace(fold=1, tzinfo=NYC), bold=True)
print_dt_eq(dt1a.replace(tzinfo=NYC), dt3a.replace(tzinfo=NYC))
Explanation: Same Zone: Wall clock times are used, offset ignored
End of explanation
print_dt_eq(dt1.replace(tzinfo=NYC), dt2.replace(tzinfo=CHI)) # Unambiguous
print_dt_eq(dt1.replace(tzinfo=NYC), dt3.replace(tzinfo=CHI))
Explanation: Comparing timezone-aware datetimes
Different zones: If both datetimes are unambiguous, the absolute times are compared:
End of explanation
print_dt_eq(dt1a.replace(fold=1, tzinfo=NYC), dt3a.replace(tzinfo=CHI), bold=True)
Explanation: If either datetime is ambiguous, the result is always False:
End of explanation
LON = gettz('Europe/London')
x = datetime(2007, 3, 25, 1, 0, tzinfo=LON)
ts = x.timestamp()
y = datetime.fromtimestamp(ts, LON)
z = datetime.fromtimestamp(ts, gettz('Europe/London'))
x == y
x == z
y == z
Explanation: A curious case...
End of explanation
dt1 = datetime(2004, 4, 4, 6, 30, tzinfo=UTC)
for i in range(3):
dt = (dt1 + timedelta(hours=i)).astimezone(NYC)
print('{} | {} '.format(dt, dt.tzname()))
print(datetime(2007, 3, 25, 1, 0, tzinfo=LON))
print(datetime(2007, 3, 25, 0, 0, tzinfo=UTC).astimezone(LON))
print(datetime(2007, 3, 25, 1, 0, tzinfo=UTC).astimezone(LON))
Explanation: Imaginary Times
Imaginary times are wall times that don't exist in a given time zone, such as during an STD to DST transition.
End of explanation
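An imaginary wall time can be detected with a UTC round-trip: convert the wall time to UTC and back, and compare — real times survive the round trip unchanged, which is the idea behind `tz.datetime_exists`. A sketch with a toy spring-forward zone (the single hard-coded transition is an illustration, not real zone data):

```python
from datetime import datetime, timedelta, timezone, tzinfo

class ToySpring(tzinfo):
    """Toy zone: springs forward from UTC-5 to UTC-4 at 2004-04-04 02:00."""
    SPRING = datetime(2004, 4, 4, 2, 0)
    def utcoffset(self, dt):
        return timedelta(hours=-4) if dt.replace(tzinfo=None) >= self.SPRING else timedelta(hours=-5)
    def dst(self, dt):
        return timedelta(hours=1) if self.utcoffset(dt) == timedelta(hours=-4) else timedelta(0)
    def tzname(self, dt):
        return "EDT" if self.utcoffset(dt) == timedelta(hours=-4) else "EST"

def exists(dt):
    # Real wall times are fixed points of the UTC round-trip
    roundtrip = dt.astimezone(timezone.utc).astimezone(dt.tzinfo)
    return dt.replace(tzinfo=None) == roundtrip.replace(tzinfo=None)

print(exists(datetime(2004, 4, 4, 1, 30, tzinfo=ToySpring())))  # True
print(exists(datetime(2004, 4, 4, 2, 30, tzinfo=ToySpring())))  # False
```

Here the imaginary 02:30 round-trips to 01:30, because 01:30 (UTC-5) and 02:30 (UTC-4) name the same instant.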
LON = gettz('Europe/London')
x = datetime(2007, 3, 25, 1, 0, tzinfo=LON)
ts = x.timestamp()
y = datetime.fromtimestamp(ts, LON)
z = datetime.fromtimestamp(ts, gettz('Europe/London'))
print('x (LON): {}'.format(x))
print('x (UTC): {}'.format(x.astimezone(UTC)))
print('x (LON->UTC->LON): {}'.format(x.astimezone(UTC).astimezone(LON)))
print('y: {}'.format(y))
print('z: {}'.format(z))
print('x: {}'.format(x))
print('y: {}'.format(y))
print('z: {}'.format(z))
x.tzinfo is y.tzinfo
x.tzinfo is z.tzinfo
Explanation: Why it was non-transitive
End of explanation
dt = datetime(2017, 8, 11, 14, tzinfo=tz.gettz('US/Pacific'))
print_tzinfo(dt)
Explanation: Working with time zones
dateutil
In dateutil's suite of tzinfo objects, you can attach time zones in the constructor if you have a wall time:
End of explanation
print_tzinfo(dt.replace(tzinfo=tz.gettz('US/Eastern')))
Explanation: If you have a naive wall time, or a wall time in another zone that you want to translate without shifting the offset, use datetime.replace:
End of explanation
print_tzinfo(dt.astimezone(tz.gettz('US/Eastern')))
Explanation: If you have an absolute time, in UTC or otherwise, use datetime.astimezone():
End of explanation
print_tzinfo(dt.astimezone(pytz.timezone('US/Eastern')))
Explanation: pytz
In pytz, datetime.astimezone() still works exactly as expected:
End of explanation
print_tzinfo(dt.replace(tzinfo=pytz.timezone('US/Eastern')))
Explanation: But the constructor or .replace methods fail horribly:
End of explanation
LOS_p = pytz.timezone('America/Los_Angeles')
dt = LOS_p.localize(datetime(2017, 8, 11, 14, 0))
print_tzinfo(dt)
Explanation: pytz's time zone model
tzinfos are all static offsets
tzinfo is attached by the time zone object itself:
End of explanation
dt_add = dt + timedelta(days=180)
print_tzinfo(dt_add)
print_tzinfo(LOS_p.normalize(dt_add))
Explanation: You must normalize() datetimes after you've done some arithmetic on them:
End of explanation
dt1 = datetime(2004, 10, 31, 6, 30, tzinfo=UTC) # This is in the fold in EST
dt_dateutil = dt1.astimezone(tz.gettz('US/Eastern'))
dt_pytz = dt1.astimezone(pytz.timezone('US/Eastern'))
print(repr(dt_dateutil))
print_tzinfo(dt_dateutil)
print(repr(dt_pytz)) # Note that pytz doesn't set fold
print_tzinfo(dt_pytz)
Explanation: Handling ambiguous times
Overview
Both dateutil and pytz will automatically give you the right absolute time if converting from an absolute time.
End of explanation
dt = datetime(2004, 10, 31, 1, 30, tzinfo=tz.gettz('US/Eastern'))
tz.enfold(dt)
Explanation: dateutil
For backwards compatibility, dateutil provides a tz.enfold method to add a fold attribute if necessary:
End of explanation
tz.datetime_ambiguous(datetime(2004, 10, 31, 1, 30, tzinfo=NYC))
tz.datetime_ambiguous(datetime(2004, 10, 31, 1, 30), NYC)
dt_0 = datetime(2004, 10, 31, 0, 30, tzinfo=NYC)
for i in range(3):
dt_i = dt_0 + timedelta(hours=i)
dt_i = tz.enfold(dt_i, tz.datetime_ambiguous(dt_i))
print('{} (fold={})'.format(dt_i, dt_i.fold))
Explanation: ```python
Python 2.7.12
Type "help", "copyright", "credits" or "license" for more information.
from datetime import datetime
from dateutil import tz
dt = datetime(2004, 10, 31, 1, 30, tzinfo=tz.gettz('US/Eastern'))
tz.enfold(dt)
_DatetimeWithFold(2004, 10, 31, 1, 30, tzinfo=tzfile('/usr/share/zoneinfo/US/Eastern'))
tz.enfold(dt).tzname()
'EST'
dt.tzname()
'EDT'
```
dateutil
To detect ambiguous times, dateutil provides tz.datetime_ambiguous
End of explanation
for i in range(3):
dt_i = tz.enfold(dt_0 + timedelta(hours=i), fold=1)
print('{} (fold={})'.format(dt_i, dt_i.fold))
Explanation: Note: fold is ignored when the datetime is not ambiguous:
End of explanation
NYC_pytz = pytz.timezone('America/New_York')
dt_pytz = NYC_pytz.localize(datetime(2004, 10, 31, 1, 30))
print_tzinfo(dt_pytz)
Explanation: pytz
When localizing times, pytz defaults to standard time:
End of explanation
dt_pytz = NYC_pytz.localize(datetime(2004, 10, 31, 1, 30), is_dst=True)
print_tzinfo(dt_pytz)
Explanation: To get a time zone in daylight time, pass is_dst=True to localize:
End of explanation
for hour in (0, 1):
dt = datetime(2004, 10, 31, hour, 30)
try:
NYC_pytz.localize(dt, is_dst=None)
print('{} | {}'.format(dt, "Unambiguous"))
except pytz.AmbiguousTimeError:
print('{} | {}'.format(dt, "Ambiguous"))
Explanation: If is_dst=None is passed to localize, pytz raises an AmbiguousTimeError:
End of explanation
dt_0 = datetime(2004, 4, 4, 1, 30, tzinfo=NYC)
for i in range(3):
dt = dt_0 + timedelta(hours=i)
print('{} ({})'.format(dt, 'Exists' if tz.datetime_exists(dt) else 'Imaginary'))
Explanation: Handling imaginary times
dateutil
dateutil provides a tz.datetime_exists() function to tell you whether you've constructed an imaginary datetime:
End of explanation
def resolve_imaginary(dt): # This is a planned feature in dateutil 2.7.0
if dt.tzinfo is not None and not tz.datetime_exists(dt):
curr_offset = dt.utcoffset()
old_offset = (dt - timedelta(hours=24)).utcoffset()
dt += curr_offset - old_offset
return dt
print(resolve_imaginary(datetime(2004, 4, 4, 2, 30, tzinfo=NYC)))
Explanation: Generally for imaginary datetimes, you either want to skip over them or "slide forward":
End of explanation
print(NYC_pytz.localize(datetime(2004, 4, 4, 2, 30), is_dst=True))
print(NYC_pytz.localize(datetime(2004, 4, 4, 2, 30), is_dst=False))
Explanation: pytz
When using localize on an imaginary datetime, pytz will create an imaginary time and use is_dst to decide what offset to assign it:
End of explanation
dt_imag_dst = NYC_pytz.localize(datetime(2004, 4, 4, 2, 30), is_dst=True)
dt_imag_std = NYC_pytz.localize(datetime(2004, 4, 4, 2, 30), is_dst=False)
print(NYC_pytz.normalize(dt_imag_dst))
print(NYC_pytz.normalize(dt_imag_std))
Explanation: If you have a non-existent date, normalize() will slide it forward or backwards, depending on the value passed to is_dst (default is False):
End of explanation
dt_0 = datetime(2004, 4, 4, 1, 30)
for i in range(3):
    dt = dt_0 + timedelta(hours=i)
    try:
        dt = NYC_pytz.localize(dt, is_dst=None)
        exists = True
    except pytz.NonExistentTimeError:
        exists = False
print('{} ({})'.format(dt, 'Exists' if exists else 'Imaginary'))
Explanation: pytz
If you pass is_dst=None, pytz will throw a NonExistentTimeError:
End of explanation
dt_pytz_real = NYC_pytz.localize(datetime(2004, 4, 4, 1, 30))
dt_pytz_imag = NYC_pytz.localize(datetime(2004, 4, 4, 2, 30))
print('Real: {}'.format(tz.datetime_exists(dt_pytz_real)))
print('Imaginary: {}'.format(tz.datetime_exists(dt_pytz_imag)))
Explanation: pytz
dateutil.tz.datetime_exists() works with pytz zones, too
End of explanation
dt_nn = dt_pytz_real + timedelta(hours=3) # Needs to be normalized to DST
print('{}: {}'.format(dt_nn, 'Exists' if tz.datetime_exists(dt_nn) else 'Imaginary'))
Explanation: And will detect non-normalized datetimes:
End of explanation
# tz.tzutc() is equivalent to pytz.UTC or timezone.utc
dt = datetime(2014, 12, 19, 22, 30, tzinfo=tz.tzutc())
print_tzinfo(dt)
Explanation: dateutil's tzinfo implementations
UTC and Static time zones
End of explanation
JST = tzoffset('JST', 32400) # Japan Standard Time is year round
IST = tzoffset('IST', # India Standard Time is year round
timedelta(hours=5, minutes=30))
EST = tzoffset(None, timedelta(hours=-5)) # Can use None as a name
dt = datetime(2016, 7, 17, 12, 15, tzinfo=tzutc())
print_tzinfo(dt.astimezone(JST))
print_tzinfo(dt.astimezone(IST))
print_tzinfo(dt.astimezone(EST))
Explanation: Static offsets represent zones with a fixed offset from UTC, and takes a tzname or either number of seconds or a timedelta:
End of explanation
from datetime import timezone
dt = datetime(2014, 12, 19, 22, 30, tzinfo=timezone.utc) # Equivalent to pytz.UTC or dateutil.tz.tzutc()
print_tzinfo(dt)
JST = timezone(timedelta(hours=9), 'JST') # Japan Standard Time is year round
IST = timezone(timedelta(hours=5, minutes=30), # India Standard Time is year round
'IST')
EST = timezone(timedelta(hours=-5)) # Without a name, it's UTC-hh:mm
dt = datetime(2016, 7, 17, 12, 15, tzinfo=tzutc())
print_tzinfo(dt.astimezone(JST)); print()
print_tzinfo(dt.astimezone(IST)); print()
print_tzinfo(dt.astimezone(EST))
Explanation: UTC and Static time zones
In Python 3.2, timezone objects were introduced to provide ready-made tzinfo subclasses for the simple case of static offsets from UTC.
End of explanation
# Temporarily changes the TZ file on *nix systems.
from helper_functions import TZEnvContext
print_tzinfo(dt.astimezone(tz.tzlocal()))
with TZEnvContext('UTC'):
print_tzinfo(dt.astimezone(tz.tzlocal()))
with TZEnvContext('PST8PDT'):
print_tzinfo((dt + timedelta(days=180)).astimezone(tz.tzlocal()))
Explanation: Local time
The tz.tzlocal() class is a tzinfo implementation that uses the OS hooks in Python's time module to get the local system time.
End of explanation
NYC = tz.gettz('America/New_York')
NYC
Explanation: Local time: Windows
tz.win.tzwinlocal() directly queries the Windows registry for its time zone data and uses that to construct a tzinfo.
Fixes this bug:
```python
dt = datetime(2014, 2, 11, 17, 0)
print(dt.replace(tzinfo=tz.tzlocal()).tzname())
Eastern Standard Time
print(dt.replace(tzinfo=tz.win.tzwinlocal()).tzname())
Eastern Standard Time
with TZWinContext('Pacific Standard Time'):
... print(dt.replace(tzinfo=tz.tzlocal()).tzname())
... print(dt.replace(tzinfo=tz.win.tzwinlocal()).tzname())
Eastern Standard Time
Pacific Standard Time
```
IANA (Olson) database
The dateutil.tz.tzfile class provides support for IANA zoneinfo binaries (shipped with *nix systems).
<br/><br/>
DO NOT USE tz.tzfile directly - use tz.gettz()
End of explanation
print_tzinfo(datetime(2017, 8, 12, 14, tzinfo=NYC)) # Eastern Daylight Time
print_tzinfo(datetime(1944, 1, 6, 12, 15, tzinfo=NYC)) # Eastern War Time
print_tzinfo(datetime(1901, 9, 6, 16, 7, tzinfo=NYC)) # Local solar mean
Explanation: The IANA database contains historical time zone transitions:
End of explanation
tz.gettz() # Passing nothing gives you local time
# If your TZSTR is an an Olson file, it is prioritized over the /etc/localtime tzfile.
with TZEnvContext('CST6CDT'):
print(gettz())
# If it doesn't find a tzfile, but it finds a valid abbreviation for the local zone,
# it returns tzlocal()
with TZEnvContext('LMT4'):
print(gettz('LMT'))
# Retrieve IANA zone:
print(gettz('Pacific/Kiritimati'))
# Directly parse a TZ variable:
print(gettz('AEST-10AEDT-11,M10.1.0/2,M4.1.0/3'))
Explanation: tz.gettz()
The most general way to get a time zone is to pass the relevant timezone string to the gettz() function, which will try parsing it a number of different ways until it finds a relevant string.
End of explanation |
8,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dependence on primary cosmic ray flux
Step1: Create an instance of an MCEqRun class. Most options are defined in the mceq_config module, and do not require change. Look into mceq_config.py or use the documentation.
If the initialization succeeds it will print out some information according to the debug level.
Step2: Solve and store results
This code below computes fluxes of neutrinos, averaged over all directions, for different primary models.
Step3: Plot with matplotlib
Step4: Save as in ASCII file for other types of processing | Python Code:
import os
import matplotlib.pyplot as plt
import numpy as np
#import solver related modules
from MCEq.core import MCEqRun
import mceq_config as config
#import primary model choices
import crflux.models as pm
Explanation: Dependence on primary cosmic ray flux
End of explanation
mceq_run = MCEqRun(
#provide the string of the interaction model
interaction_model='SIBYLL2.3c',
#primary cosmic ray flux model
#support a tuple (primary model class (not instance!), arguments)
primary_model=(pm.HillasGaisser2012, "H3a"),
# Zenith angle in degrees. 0=vertical, 90=horizontal
theta_deg=0.0
)
Explanation: Create an instance of an MCEqRun class. Most options are defined in the mceq_config module, and do not require change. Look into mceq_config.py or use the documentation.
If the initialization succeeds it will print out some information according to the debug level.
End of explanation
# Bump up the debug level to see what the calculation is doing
config.debug_level = 2
#Define equidistant grid in cos(theta)
angles = np.arccos(np.linspace(1,0,11))*180./np.pi
#Power of energy to scale the flux
mag = 3
#obtain energy grid (nver changes) of the solution for the x-axis of the plots
e_grid = mceq_run.e_grid
p_spectrum_flux = []
#Initialize empty grid
for pmcount, pmodel in enumerate([(pm.HillasGaisser2012,'H3a'),
(pm.HillasGaisser2012,'H4a'),
(pm.GaisserStanevTilav,'3-gen'),
(pm.GaisserStanevTilav,'4-gen')]):
mceq_run.set_primary_model(*pmodel)
flux = {}
for frac in ['mu_conv','mu_pr','mu_total',
'numu_conv','numu_pr','numu_total',
'nue_conv','nue_pr','nue_total','nutau_pr']:
flux[frac] = np.zeros_like(e_grid)
#Sum fluxes, calculated for different angles
for theta in angles:
mceq_run.set_theta_deg(theta)
mceq_run.solve()
#_conv means conventional (mostly pions and kaons)
flux['mu_conv'] += (mceq_run.get_solution('conv_mu+', mag)
+ mceq_run.get_solution('conv_mu-', mag))
# _pr means prompt (the mother of the muon had a critical energy
# higher than a D meson. Includes all charm and direct resonance
# contribution)
flux['mu_pr'] += (mceq_run.get_solution('pr_mu+', mag)
+ mceq_run.get_solution('pr_mu-', mag))
# total means conventional + prompt
flux['mu_total'] += (mceq_run.get_solution('total_mu+', mag)
+ mceq_run.get_solution('total_mu-', mag))
# same meaning of prefixes for muon neutrinos as for muons
flux['numu_conv'] += (mceq_run.get_solution('conv_numu', mag)
+ mceq_run.get_solution('conv_antinumu', mag))
flux['numu_pr'] += (mceq_run.get_solution('pr_numu', mag)
+ mceq_run.get_solution('pr_antinumu', mag))
flux['numu_total'] += (mceq_run.get_solution('total_numu', mag)
+ mceq_run.get_solution('total_antinumu', mag))
# same meaning of prefixes for electron neutrinos as for muons
flux['nue_conv'] += (mceq_run.get_solution('conv_nue', mag)
+ mceq_run.get_solution('conv_antinue', mag))
flux['nue_pr'] += (mceq_run.get_solution('pr_nue', mag)
+ mceq_run.get_solution('pr_antinue', mag))
flux['nue_total'] += (mceq_run.get_solution('total_nue', mag)
+ mceq_run.get_solution('total_antinue', mag))
# since there are no conventional tau neutrinos, prompt=total
flux['nutau_pr'] += (mceq_run.get_solution('total_nutau', mag)
+ mceq_run.get_solution('total_antinutau', mag))
#average the results
for frac in ['mu_conv','mu_pr','mu_total',
'numu_conv','numu_pr','numu_total',
'nue_conv','nue_pr','nue_total','nutau_pr']:
flux[frac] = flux[frac]/float(len(angles))
p_spectrum_flux.append((flux,mceq_run.pmodel.sname,mceq_run.pmodel.name))
Explanation: Solve and store results
This code below computes fluxes of neutrinos, averaged over all directions, for different primary models.
End of explanation
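The accumulate-then-divide averaging in the loop above is equivalent to a single numpy mean over a stacked array. A small sketch with stand-in spectra (the shapes are illustrative, not MCEq output):

```python
import numpy as np

n_angles, n_bins = 11, 8
rng = np.random.default_rng(1)
spectra = rng.random((n_angles, n_bins))  # one flux spectrum per zenith angle

# Running-sum version, as in the loop above
acc = np.zeros(n_bins)
for s in spectra:
    acc += s
avg_loop = acc / n_angles

# Vectorized equivalent
avg_vec = spectra.mean(axis=0)

print(np.allclose(avg_loop, avg_vec))
```

Stacking per-angle solutions into a 2-D array also makes it easy to compute, e.g., angular spreads with `spectra.std(axis=0)`.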
#get path of the home directory + Desktop
desktop = os.path.join(os.path.expanduser("~"),'Desktop')
for pref, lab in [('numu_',r'\nu_\mu'),
('mu_',r'\mu'),
('nue_',r'\nu_e')
]:
plt.figure(figsize=(4.5, 3.5))
for (flux, p_sname, p_name), col in zip(p_spectrum_flux,['k','r','g','b','c']):
plt.loglog(e_grid, flux[pref + 'total'], color=col, ls='-', lw=2.5,
label=p_sname, alpha=0.4)
plt.loglog(e_grid, flux[pref + 'conv'], color=col, ls='--', lw=1,
label='_nolabel_')
plt.loglog(e_grid, flux[pref + 'pr'], color=col,ls='-', lw=1,
label='_nolabel_')
plt.xlim(50,1e9)
plt.ylim(1e-5,1)
plt.xlabel(r"$E_{{{0}}}$ [GeV]".format(lab))
plt.ylabel(r"$\Phi_{" + lab + "}$ (E/GeV)$^{" + str(mag) +" }$" +
"(cm$^{2}$ s sr GeV)$^{-1}$")
plt.legend(loc='upper right')
plt.tight_layout()
# Uncoment if you want to save the plot
# plt.savefig(os.path.join(desktop,pref + 'flux.pdf'))
Explanation: Plot with matplotlib
End of explanation
for (flux, p_sname, p_name) in p_spectrum_flux:
np.savetxt(open(os.path.join(desktop, 'numu_flux_' + p_sname + '.txt'),'w'),
           np.column_stack((e_grid,
                            flux['mu_conv'], flux['mu_pr'], flux['mu_total'],
                            flux['numu_conv'], flux['numu_pr'], flux['numu_total'],
                            flux['nue_conv'], flux['nue_pr'], flux['nue_total'],
                            flux['nutau_pr'])),
           fmt='%6.5E',
           header=('lepton flux scaled with E**{0}. Order (E, mu_conv, mu_pr, mu_total, ' +
                   'numu_conv, numu_pr, numu_total, nue_conv, nue_pr, nue_total, ' +
                   'nutau_pr').format(mag)
           )
Explanation: Save as in ASCII file for other types of processing
End of explanation |
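A minimal, self-contained sketch of the same save/reload pattern: np.column_stack gives np.savetxt a 2-D array (a bare zip would fail on Python 3, where it returns an iterator). The spectra below are stand-ins, not MCEq output:

```python
import io
import numpy as np

e_grid = np.logspace(0, 3, 4)
flux_a = e_grid ** -2.7  # toy power-law spectra
flux_b = e_grid ** -3.0

buf = io.StringIO()  # any file-like object works, including open(..., 'w')
np.savetxt(buf, np.column_stack((e_grid, flux_a, flux_b)),
           fmt="%6.5E", header="E, flux_a, flux_b")
buf.seek(0)
table = np.loadtxt(buf)  # '#'-prefixed header lines are skipped
print(table.shape)  # (4, 3)
```

Reading the file back with np.loadtxt recovers one column per quantity, in the order written.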
8,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 07 - Non linear Elliptic problem
Keywords
Step1: 3. Affine Decomposition
For this problem the affine decomposition is straightforward
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the NonlinearElliptic class
Step5: 4.4. Prepare reduction with a POD-Galerkin method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis | Python Code:
from dolfin import *
from rbnics import *
Explanation: Tutorial 07 - Non linear Elliptic problem
Keywords: EIM, POD-Galerkin
1. Introduction
In this tutorial, we consider a non linear elliptic problem in a two-dimensional spatial domain $\Omega=(0,1)^2$. We impose a homogeneous Dirichlet condition on the boundary $\partial\Omega$. The source term is characterized by the following expression
$$
g(\boldsymbol{x}; \boldsymbol{\mu}) = 100\sin(2\pi x_0)\cos(2\pi x_1) \quad \forall \boldsymbol{x} = (x_0, x_1) \in \Omega.
$$
This problem is characterized by two parameters. The first parameter $\mu_0$ controls the strength of the sink term and the second parameter $\mu_1$ the strength of the nonlinearity. The range of the two parameters is the following:
$$
\mu_0,\mu_1\in[0.01,10.0]
$$
The parameter vector $\boldsymbol{\mu}$ is thus given by
$$
\boldsymbol{\mu} = (\mu_0,\mu_1)
$$
on the parameter domain
$$
\mathbb{P}=[0.01,10]^2.
$$
In order to obtain a faster approximation of the problem, we pursue a model reduction by means of a POD-Galerkin reduced order method. In order to preserve the affinity assumption empirical interpolation method will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$.
2. Parametrized formulation
Let $u(\boldsymbol{\mu})$ be the solution in the domain $\Omega$.
The strong formulation of the parametrized problem is given by:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})$ such that</center>
$$ -\nabla^2u(\boldsymbol{\mu})+\frac{\mu_0}{\mu_1}(\exp{\mu_1u(\boldsymbol{\mu})}-1)=g(\boldsymbol{x}; \boldsymbol{\mu})$$
<br>
The corresponding weak formulation reads:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})\in\mathbb{V}$ such that</center>
$$a\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)+c\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)=f(v;\boldsymbol{\mu})\quad \forall v\in\mathbb{V}$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = \{v\in H^1(\Omega) : v|_{\partial\Omega}=0\}
$$
the parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(u, v;\boldsymbol{\mu})=\int_{\Omega} \nabla u\cdot \nabla v \ d\boldsymbol{x},$$
the parametrized bilinear form $c(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$c(u, v;\boldsymbol{\mu})=\mu_0\int_{\Omega} \frac{1}{\mu_1}\big(\exp{\mu_1u} - 1\big)v \ d\boldsymbol{x},$$
the parametrized linear form $f(\cdot; \boldsymbol{\mu}): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(v; \boldsymbol{\mu})= \int_{\Omega}g(\boldsymbol{x}; \boldsymbol{\mu})v \ d\boldsymbol{x}.$$
The output of interest $s(\boldsymbol{\mu})$ is given by
$$s(\boldsymbol{\mu}) = \int_{\Omega} u(\boldsymbol{\mu}) \ d\boldsymbol{x}$$
and is computed for each $\boldsymbol{\mu}$.
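Because $c$ is nonlinear in $u$, the truth and reduced solvers rely on Newton iterations, which require the Gateaux derivative of $c$. Differentiating the integrand with respect to $u$ gives
$$dc(u; \delta u, v; \boldsymbol{\mu}) = \mu_0\int_{\Omega} \exp(\mu_1 u)\, \delta u\, v \ d\boldsymbol{x},$$
which is the dc term assembled below in assemble_operator.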
End of explanation
@EIM("online")
@ExactParametrizedFunctions("offline")
class NonlinearElliptic(NonlinearEllipticProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
NonlinearEllipticProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.du = TrialFunction(V)
self.u = self._solution
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
# Store the forcing term expression
self.f = Expression("sin(2*pi*x[0])*sin(2*pi*x[1])", element=self.V.ufl_element())
# Customize nonlinear solver parameters
self._nonlinear_solver_parameters.update({
"linear_solver": "mumps",
"maximum_iterations": 20,
"report": True
})
# Return custom problem name
def name(self):
return "NonlinearEllipticEIM"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_derivatives
def compute_theta(self, term):
mu = self.mu
if term == "a":
theta_a0 = 1.
return (theta_a0,)
elif term == "c":
theta_c0 = mu[0]
return (theta_c0,)
elif term == "f":
theta_f0 = 100.
return (theta_f0,)
elif term == "s":
theta_s0 = 1.0
return (theta_s0,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
du = self.du
a0 = inner(grad(du), grad(v)) * dx
return (a0,)
elif term == "c":
u = self.u
mu = self.mu
c0 = (exp(mu[1] * u) - 1) / mu[1] * v * dx
return (c0,)
elif term == "dc": # preferred over derivative() computation which does not cancel out trivial mu[1] factors
du = self.du
u = self.u
mu = self.mu
dc0 = exp(mu[1] * u) * du * v * dx
return (dc0,)
elif term == "f":
f = self.f
f0 = f * v * dx
return (f0,)
elif term == "s":
s0 = v * dx
return (s0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1)]
return (bc0,)
elif term == "inner_product":
du = self.du
x0 = inner(grad(du), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
# Customize the resulting reduced problem
@CustomizeReducedProblemFor(NonlinearEllipticProblem)
def CustomizeReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
class ReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
def __init__(self, truth_problem, **kwargs):
ReducedNonlinearElliptic_Base.__init__(self, truth_problem, **kwargs)
self._nonlinear_solver_parameters.update({
"report": True,
"line_search": "wolfe"
})
return ReducedNonlinearElliptic
Explanation: 3. Affine Decomposition
For this problem the affine decomposition is straightforward:
$$a(u,v;\boldsymbol{\mu})=\underbrace{1}_{\Theta^{a}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\nabla u \cdot \nabla v \ d\boldsymbol{x}}_{a_0(u,v)},$$
$$c(u,v;\boldsymbol{\mu})=\underbrace{\mu_0}_{\Theta^{c}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\frac{1}{\mu_1}\big(\exp(\mu_1u) - 1\big)v \ d\boldsymbol{x}}_{c_0(u,v)},$$
$$f(v; \boldsymbol{\mu}) = \underbrace{100}_{\Theta^{f}_0(\boldsymbol{\mu})} \underbrace{\int_{\Omega}\sin(2\pi x_0)\cos(2\pi x_1)v \ d\boldsymbol{x}}_{f_0(v)}.$$
We will implement the numerical discretization of the problem in the class
class NonlinearElliptic(NonlinearEllipticProblem):
by specifying the coefficients $\Theta^{a}_*(\boldsymbol{\mu})$, $\Theta^{c}_*(\boldsymbol{\mu})$ and $\Theta^{f}_*(\boldsymbol{\mu})$ in the method
def compute_theta(self, term):
and the bilinear forms $a_*(u, v)$, $c_*(u, v)$ and linear forms $f_*(v)$ in
def assemble_operator(self, term):
End of explanation
mesh = Mesh("data/square.xml")
subdomains = MeshFunction("size_t", mesh, "data/square_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/square_facet_region.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
V = FunctionSpace(mesh, "Lagrange", 1)
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
problem = NonlinearElliptic(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(0.01, 10.0), (0.01, 10.0)]
problem.set_mu_range(mu_range)
Explanation: 4.3. Allocate an object of the NonlinearElliptic class
End of explanation
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(20, EIM=21)
reduction_method.set_tolerance(1e-8, EIM=1e-4)
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
reduction_method.initialize_training_set(50, EIM=60)
reduced_problem = reduction_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
online_mu = (0.3, 9.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
Explanation: 4.6. Perform an online solve
End of explanation
reduction_method.initialize_testing_set(50, EIM=60)
reduction_method.error_analysis()
Explanation: 4.7. Perform an error analysis
End of explanation
reduction_method.speedup_analysis()
Explanation: 4.8. Perform a speedup analysis
End of explanation |
8,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 1, Table 1
This notebook explains how I used the Harvard General Inquirer to streamline interpretation of a predictive model.
I'm italicizing the word "streamline" because I want to emphasize that I place very little weight on the Inquirer
Step1: Loading the General Inquirer.
This takes some doing, because the General Inquirer doesn't start out as a set of wordlists. I have to translate it into that form.
I start by loading an English dictionary.
Step2: The next stage is to translate the Inquirer. It begins as a table where word senses are row labels, and the Inquirer categories are columns (except for two columns at the beginning and two at the end). This is, by the way, the "basic spreadsheet" described at this site
Step3: Load model predictions about volumes
The next step is to create some vectors that store predictions about volumes. In this case, these are predictions about the probability that a volume is fiction, rather than biography.
Step4: And get the wordcounts themselves
This cell of the notebook is very short (one line), but it takes a lot of time to execute. There's a lot of file i/o that happens inside the function get_wordcounts, in the FileCabinet module, which is invoked here. We come away with a dictionary of wordcounts, keyed in the first instance by volume ID.
Step5: Now calculate the representation of each Inquirer category in each doc
We normalize by the total wordcount for a volume.
This cell also takes a long time to run. I've added a counter so you have some confidence that it's still running.
Step6: Calculate correlations
Now that we have all the information, calculating correlations is easy. We iterate through Inquirer categories, in each case calculating the correlation between a vector of model predictions for docs, and a vector of category-frequencies for docs.
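For readers who want the formula being applied, Pearson's r is the covariance of the two vectors divided by the product of their standard deviations. A minimal pure-Python sketch on toy vectors (illustrative numbers, not the real model output):

```python
from math import sqrt

def pearson_r(x, y):
    # Pearson correlation: covariance / (std_x * std_y)
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    std_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    std_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (std_x * std_y)

# A category frequency that rises linearly with the predictions correlates at 1.0
predictions = [0.1, 0.4, 0.5, 0.9]
frequencies = [0.02, 0.08, 0.10, 0.18]
r = pearson_r(predictions, frequencies)
```

This is the same quantity scipy's pearsonr returns as its first element.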
Step7: Load expanded names of Inquirer categories
The terms used in the inquirer spreadsheet are not very transparent. DAV for instance is "descriptive action verbs." BodyPt is "body parts." To make these more transparent, I have provided expanded names for many categories that turned out to be relevant in the book, trying to base my description on the accounts provided here
Step8: Print results
I print the top 12 correlations and the bottom 12, skipping categories that are drawn from the "Laswell value dictionary." The Laswell categories are very finely discriminated (things like "enlightenment gain" or "power loss"), and I have little faith that they're meaningful. I especially doubt that they could remain meaningful when the Inquirer is used crudely as a source of wordlists. | Python Code:
# some standard modules
import csv, os, sys
from collections import Counter
import numpy as np
from scipy.stats import pearsonr
# now a module that I wrote myself, located
# a few directories up, in the software
# library for this repository
sys.path.append('../../lib')
import FileCabinet as filecab
Explanation: Chapter 1, Table 1
This notebook explains how I used the Harvard General Inquirer to streamline interpretation of a predictive model.
I'm italicizing the word "streamline" because I want to emphasize that I place very little weight on the Inquirer: as I say in the text, "The General Inquirer has no special authority, and I have tried not to make it a load-bearing element of this argument."
To interpret a model, I actually spend a lot of time looking at lists of features, as well as predictions about individual texts. But to explain my interpretation, I need some relatively simple summary. Given real-world limits on time and attention, going on about lists of individual words for five pages is rarely an option. So, although wordlists are crude and arbitrary devices, flattening out polysemy and historical change, I am willing to lean on them rhetorically, where I find that they do in practice echo observations I have made in other ways.
I should also acknowledge that I'm not using the General Inquirer as it was designed to be used. The full version of this tool is not just a set of wordlists, it's a software package that tries to get around polysemy by disambiguating different word senses. I haven't tried to use it in that way: I think it would complicate my explanation, in order to project an impression of accuracy and precision that I don't particularly want to project. Instead, I have stressed that word lists are crude tools, and I'm using them only as crude approximations.
That said, how do I do it?
To start with, we'll load an array of modules. Some standard, some utilities that I've written myself.
End of explanation
# start by loading the dictionary
dictionary = set()
with open('../../lexicons/MainDictionary.txt', encoding = 'utf-8') as f:
reader = csv.reader(f, delimiter = '\t')
for row in reader:
word = row[0]
count = int(row[2])
if count < 10000:
continue
# that ignores very rare words
# we end up with about 42,700 common ones
else:
dictionary.add(word)
Explanation: Loading the General Inquirer.
This takes some doing, because the General Inquirer doesn't start out as a set of wordlists. I have to translate it into that form.
I start by loading an English dictionary.
End of explanation
inquirer = dict()
suffixes = dict()
suffixes['verb'] = ['s', 'es', 'ed', 'd', 'ing']
suffixes['noun'] = ['s', 'es']
allinquirerwords = set()
with open('../../lexicons/inquirerbasic.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
fields = reader.fieldnames[2:-2]
for field in fields:
inquirer[field] = set()
for row in reader:
term = row['Entry']
if '#' in term:
parts = term.split('#')
word = parts[0].lower()
sense = int(parts[1].strip('_ '))
partialsense = True
else:
word = term.lower()
sense = 0
partialsense = False
if sense > 1:
continue
# we're ignoring uncommon senses
pos = row['Othtags']
if 'Noun' in pos:
pos = 'noun'
elif 'SUPV' in pos:
pos = 'verb'
forms = {word}
if pos == 'noun' or pos == 'verb':
for suffix in suffixes[pos]:
if word + suffix in dictionary:
forms.add(word + suffix)
if pos == 'verb' and word.rstrip('e') + suffix in dictionary:
forms.add(word.rstrip('e') + suffix)
for form in forms:
for field in fields:
if len(row[field]) > 1:
inquirer[field].add(form)
allinquirerwords.add(form)
print('Inquirer loaded')
print('Total of ' + str(len(allinquirerwords)) + " words.")
Explanation: The next stage is to translate the Inquirer. It begins as a table where word senses are row labels, and the Inquirer categories are columns (except for two columns at the beginning and two at the end). This is, by the way, the "basic spreadsheet" described at this site:
http://www.wjh.harvard.edu/~inquirer/spreadsheet_guide.htm
I translate this into a dictionary where the keys are Inquirer categories, and the values are sets of words associated with each category.
But to do that, I have to do some filtering and expanding. Different senses of a word are broken out in the spreadsheet thus:
ABOUT#1
ABOUT#2
ABOUT#3
etc.
I need to separate the hashtag part. Also, because I don't want to allow rare senses of a word too much power, I ignore everything but the first sense of a word.
However, I also want to allow singular verb forms and plural nouns to count. So there's some code below that expands words by adding -s -ed, etc to the end. See the suffixes dictionary defined below for more details.
End of explanation
# the folder where wordcounts will live
# we're only going to load predictions
# that correspond to files located there
sourcedir = '../sourcefiles/'
docs = []
logistic = []
with open('../plotdata/the900.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
genre = row['realclass']
docid = row['volid']
if not os.path.exists(sourcedir + docid + '.tsv'):
continue
docs.append(row['volid'])
logistic.append(float(row['logistic']))
logistic = np.array(logistic)
numdocs = len(docs)
assert numdocs == len(logistic)
print("We have information about " + str(numdocs) + " volumes.")
Explanation: Load model predictions about volumes
The next step is to create some vectors that store predictions about volumes. In this case, these are predictions about the probability that a volume is fiction, rather than biography.
End of explanation
wordcounts = filecab.get_wordcounts(sourcedir, '.tsv', docs)
Explanation: And get the wordcounts themselves
This cell of the notebook is very short (one line), but it takes a lot of time to execute. There's a lot of file i/o that happens inside the function get_wordcounts, in the FileCabinet module, which is invoked here. We come away with a dictionary of wordcounts, keyed in the first instance by volume ID.
End of explanation
# Initialize empty category vectors
categories = dict()
for field in fields:
categories[field] = np.zeros(numdocs)
# Now fill them
for i, doc in enumerate(docs):
ctcat = Counter()
allcats = 0
for word, count in wordcounts[doc].items():
if word in dictionary:
allcats += count
if word not in allinquirerwords:
continue
for field in fields:
if word in inquirer[field]:
ctcat[field] += count
for field in fields:
categories[field][i] = ctcat[field] / (allcats + 0.1)
# Laplacian smoothing there to avoid div by zero, among other things.
if i % 100 == 1:
print(i, allcats)
Explanation: Now calculate the representation of each Inquirer category in each doc
We normalize by the total wordcount for a volume.
This cell also takes a long time to run. I've added a counter so you have some confidence that it's still running.
End of explanation
logresults = []
for inq_category in fields:
l = pearsonr(logistic, categories[inq_category])[0]
logresults.append((l, inq_category))
logresults.sort()
Explanation: Calculate correlations
Now that we have all the information, calculating correlations is easy. We iterate through Inquirer categories, in each case calculating the correlation between a vector of model predictions for docs, and a vector of category-frequencies for docs.
End of explanation
short2long = dict()
with open('../../lexicons/long_inquirer_names.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
short2long[row['short_name']] = row['long_name']
Explanation: Load expanded names of Inquirer categories
The terms used in the inquirer spreadsheet are not very transparent. DAV for instance is "descriptive action verbs." BodyPt is "body parts." To make these more transparent, I have provided expanded names for many categories that turned out to be relevant in the book, trying to base my description on the accounts provided here: http://www.wjh.harvard.edu/~inquirer/homecat.htm
We load these into a dictionary.
End of explanation
print('Printing the correlations of General Inquirer categories')
print('with the predicted probabilities of being fiction in allsubset2.csv:')
print()
print('First, top positive correlations: ')
print()
for prob, n in reversed(logresults[-12 : ]):
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
print()
print('Now, negative correlations: ')
print()
for prob, n in logresults[0 : 12]:
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
Explanation: Print results
I print the top 12 correlations and the bottom 12, skipping categories that are drawn from the "Laswell value dictionary." The Laswell categories are very finely discriminated (things like "enlightenment gain" or "power loss"), and I have little faith that they're meaningful. I especially doubt that they could remain meaningful when the Inquirer is used crudely as a source of wordlists.
End of explanation |
8,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Images and TensorFlow
TensorFlow is designed to support working with images as input to neural networks. TensorFlow supports loading common file formats (JPG, PNG), working in different color spaces (RGB, RGBA) and common image manipulation tasks. TensorFlow makes it easier to work with images but it's still a challenge. The largest challenge working with images are the size of the tensor which is eventually loaded. Every image requires a tensor the same size as the image's <span class="math-tex" data-type="tex">\(height * width * channels\)</span>. As a reminder, channels are represented as a rank 1 tensor including a scalar amount of color in each channel.
A red RGB pixel in TensorFlow would be represented with the following tensor.
Step1: Each scalar can be changed to make the pixel another color or a mix of colors. The rank 1 tensor of a pixel is in the format of [red, green, blue] for an RGB color space. All the pixels in an image are stored in files on a disk which need to be read into memory so TensorFlow may operate on them.
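To keep the layout concrete without involving TensorFlow at all, here is a minimal sketch of the same [height][width][channel] structure using plain Python lists (the 3x3 size and the center-pixel choice are just illustrative assumptions):

```python
# A 3x3 RGB image as nested [height][width][channel] lists.
# Every pixel starts black; the center pixel is set to the pure red
# [red, green, blue] value described above.
height, width = 3, 3
image = [[[0, 0, 0] for _ in range(width)] for _ in range(height)]
image[1][1] = [255, 0, 0]
```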
Loading images
TensorFlow is designed to make it easy to load files from disk quickly. Loading images is the same as loading any other large binary file until the contents are decoded. Loading this example 3x3 pixel RGB JPG image is done using a similar process to loading any other type of file.
Step2: The image is assumed to be located in a directory relative to where this code is run. An input producer (tf.train.string_input_producer) finds the files and adds them to a queue for loading. Loading an image requires reading the entire file into memory (tf.WholeFileReader), and once a file has been read (image_reader.read) the resulting image is decoded (tf.image.decode_jpeg).
Now the image can be inspected; since there is only one file by that name, the queue will always return the same image.
Step3: Inspect the output from loading an image and notice that it's a fairly simple rank 3 tensor. The RGB values are found in 9 rank 1 tensors. The higher rank of the image should be familiar from earlier sections. The format of the image loaded in memory is now [batch_size, image_height, image_width, channels].
The batch_size in this example is 1 because there are no batching operations happening. Batching of input is covered in the TensorFlow documentation with a great amount of detail. When dealing with images, note the amount of memory required to load the raw images. If the images are too large or too many are loaded in a batch, the system may stop responding.
Image Formats
It's important to consider aspects of images and how they affect a model. Consider what would happen if a network is trained with input from a single frame of a RED Weapon Camera, which at the time of writing this, has an effective pixel count of 6144x3160. That'd be 19,415,040 rank one tensors with 3 dimensions of color information.
Practically speaking, an input of that size will use a huge amount of system memory. Training a CNN takes a large amount of time, and loading very large files slows it down further. Even if the increase in time were acceptable, a single image of that size would be hard to fit in memory on the majority of systems' GPUs.
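A quick back-of-the-envelope calculation makes the memory concern concrete (this assumes one byte per channel; decoded float32 tensors would be four times larger):

```python
# Effective resolution quoted above for a single RED Weapon frame
height, width, channels = 3160, 6144, 3
pixels = height * width          # number of rank 1 color tensors
raw_bytes = pixels * channels    # one uint8 byte per channel
megabytes = raw_bytes / 1024 / 1024
```

That is roughly 55 MB for a single raw frame, before batching.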
A large input image is counterproductive to training most CNNs as well. The CNN is attempting to find inherent attributes in an image, which are unique but generalized so that they may be applied to other images with similar results. Using a large input floods a network with irrelevant information which will keep the model from generalizing.
In the Stanford Dogs Dataset there are two extremely different images of the same dog breed which should both match as a Pug. Although cute, these images are filled with useless information which mislead a network during training. For example, the hat worn by the Pug in n02110958_4030.jpg isn't a feature a CNN needs to learn in order to match a Pug. Most Pugs prefer pirate hats so the jester hat is training the network to match a hat which most Pugs don't wear.
Highlighting important information in images is done by storing them in an appropriate file format and manipulating them. Different formats can be used to solve different problems encountered while working with images.
JPEG and PNG
TensorFlow has two ops used to decode image data: tf.image.decode_jpeg and tf.image.decode_png. JPEG and PNG are common file formats in computer vision applications because it's trivial to convert other formats to them.
Something important to keep in mind, JPEG images don't store any alpha channel information and PNG images do. This could be important if what you're training on requires alpha information (transparency). An example usage scenario is one where you've manually cut out some pieces of an image, for example, irrelevant jester hats on dogs. Setting those pieces to black would make them seem of similar importance to other black colored items in the image. Setting the removed hat to have an alpha of 0 would help in distinguishing its removal.
When working with JPEG images, don't manipulate them too much because it'll leave artifacts. Instead, plan to take raw images and export them to JPEG while doing any manipulation needed. Try to manipulate images before loading them whenever possible to save time in training.
PNG images work well if manipulation is required. PNG format is lossless so it'll keep all the information from the original file (unless they've been resized or downsampled). The downside to PNGs is that the files are larger than their JPEG counterpart.
TFRecord
TensorFlow has a built-in file format designed to keep binary data and label (category for training) data in the same file. The format is called TFRecord and the format requires a preprocessing step to convert images to a TFRecord format before training. The largest benefit is keeping each input image in the same file as the label associated with it.
Technically, TFRecord files are protobuf formatted files. They are great for use as a preprocessed format because they aren't compressed and can be loaded into memory quickly. In this example, an image is written to a new TFRecord formatted file and its label is stored as well.
Step4: The label is in a format known as one-hot encoding which is a common way to work with label data for categorization of multi-class data. The Stanford Dogs Dataset is being treated as multi-class data because the dogs are being categorized as a single breed and not a mix of breeds. In the real world, a multilabel solution would work well to predict dog breeds because it'd be capable of matching a dog with multiple breeds.
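One-hot encoding itself is independent of TensorFlow; a minimal sketch over a hypothetical three-breed label space:

```python
breeds = ["pug", "beagle", "collie"]  # hypothetical label space

def one_hot(label, labels):
    # 1 at the matching label's index, 0 everywhere else
    return [1 if candidate == label else 0 for candidate in labels]

pug_label = one_hot("pug", breeds)
```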
In the example code, the image is loaded into memory and converted into an array of bytes. The bytes are then added to the tf.train.Example file which are serialized SerializeToString before storing to disk. Serialization is a way of converting the in memory object into a format safe to be transferred to a file. The serialized example is now saved in a format which can be loaded and deserialized back to the example format saved here.
Now that the image is saved as a TFRecord it can be loaded again but this time from the TFRecord file. This would be the loading required in a training step to load the image and label for training. This will save time from loading the input image and its corresponding label separately.
Step5: At first, the file is loaded in the same way as any other file. The main difference is that the file is then read using a TFRecordReader. Instead of decoding the image, the TFRecord is parsed (tf.parse_single_example) and then the image is read as raw bytes (tf.decode_raw).
After the file is loaded, it is reshaped (tf.reshape) in order to keep it in the layout tf.nn.conv2d expects, [image_height, image_width, image_channels]. It'd be safe to expand the dimensions (tf.expand_dims) in order to add in the batch_size dimension to the input_batch.
In this case a single image is in the TFRecord but these record files support multiple examples being written to them. It'd be safe to have a single TFRecord file which stores an entire training set but splitting up the files doesn't hurt.
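The decode-raw-then-reshape step can be mimicked without TensorFlow to see what it does: a flat run of bytes is regrouped into the [image_height, image_width, image_channels] layout. The tiny 2x2 RGB size here is an assumption for illustration:

```python
height, width, channels = 2, 2, 3
flat = bytes(range(height * width * channels))  # 12 raw bytes, as tf.decode_raw would yield

# Regroup the flat buffer into [height][width][channel] order
image = [[[flat[(row * width + col) * channels + ch] for ch in range(channels)]
          for col in range(width)]
         for row in range(height)]
```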
The following code is useful to check that the image saved to disk is the same as the image which was loaded from TensorFlow.
Step6: All of the attributes of the original image and the image loaded from the TFRecord file are the same. To be sure, load the label from the TFRecord file and check that it is the same as the one saved earlier.
Step7: Creating a file which stores both the raw image data and the expected output label will save complexities during training. It's not required to use TFRecord files but it's highly recommended when working with images. If it doesn't work well for a workflow, it's still recommended to preprocess images and save them before training. Manipulating an image each time it's loaded is not recommended.
Image Manipulation
CNNs work well when they're given a large amount of diverse quality training data. Images capture complex scenes in a way which visually communicates an intended subject. In the Stanford Dogs Dataset, it's important that the images visually highlight the importance of dogs in the picture. A picture with a dog clearly visible in the center is considered more valuable than one with a dog in the background.
Not all datasets have the most valuable images. The following are two images from the Stanford Dogs Dataset which are supposed to highlight dog breeds. The image on the left n02113978_3480.jpg highlights important attributes of a typical Mexican Hairless Dog, while the image on the right n02113978_1030.jpg highlights the look of inebriated party goers scaring a Mexican Hairless Dog. The image on the right n02113978_1030.jpg is filled with irrelevant information which may train a CNN to categorize party goer faces instead of Mexican Hairless Dog breeds. Images like this may still include an image of a dog and could be manipulated to highlight the dog instead of people.
Image manipulation is best done as a preprocessing step in most scenarios. An image can be cropped, resized and the color levels adjusted. On the other hand, there is an important use case for manipulating an image while training. After an image is loaded, it can be flipped or distorted to diversify the input training information used with the network. This step adds further processing time but helps with overfitting.
TensorFlow is not designed as an image manipulation framework. There are libraries available in Python which support more image manipulation than TensorFlow (PIL and OpenCV). For TensorFlow, we'll summarize a few useful image manipulation features available which are useful in training CNNs.
Cropping
Cropping an image will remove certain regions of the image without keeping any information. Cropping is similar to tf.slice where a section of a tensor is cut out from the full tensor. Cropping an input image for a CNN can be useful if there is extra input along a dimension which isn't required. For example, cropping dog pictures where the dog is in the center of the images to reduce the size of the input.
Step8: The example code uses tf.image.central_crop to crop out 10% of the image and return it. This method always returns based on the center of the image being used.
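Conceptually, a central crop is just slicing around the image's midpoint. A TensorFlow-free sketch (the 4x4 single-channel image and the 50% fraction are illustrative assumptions):

```python
def central_crop(image, fraction):
    # Keep the central `fraction` of rows and columns (fraction in (0, 1])
    height, width = len(image), len(image[0])
    keep_h = max(1, int(height * fraction))
    keep_w = max(1, int(width * fraction))
    top = (height - keep_h) // 2
    left = (width - keep_w) // 2
    return [row[left:left + keep_w] for row in image[top:top + keep_h]]

image = [[row * 4 + col for col in range(4)] for row in range(4)]
cropped = central_crop(image, 0.5)
```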
Cropping is usually done in preprocessing but it can be useful when training if the background is useful. When the background is useful then cropping can be done while randomizing the center offset of where the crop begins.
Step9: The example code uses tf.image.crop_to_bounding_box in order to crop the image starting at the upper left pixel located at (0, 0). Currently, the function only works with a tensor which has a defined shape so an input image needs to be executed on the graph first.
Padding
Pad an image with zeros in order to make it the same size as an expected image. This can be accomplished using tf.pad but TensorFlow has another function useful for resizing images which are too large or too small. The method will pad an image which is too small including zeros along the edges of the image. Often, this method is used to resize small images because any other method of resizing will distort the image.
Step10: This example code increases the image's height by one pixel and its width by one pixel as well. The new pixels are all set to 0. Padding in this manner is useful for scaling up an image which is too small. This can happen if there are images in the training set with a mix of aspect ratios. TensorFlow has a useful shortcut for resizing images which don't match the same aspect ratio using a combination of pad and crop.
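The zero-padding idea is simple enough to sketch without TensorFlow: surround a small image with zeros until it reaches a target size (centered here, with any odd leftover going toward the bottom/right — an illustrative convention, not necessarily TensorFlow's):

```python
def pad_to(image, target_h, target_w):
    # Place the image in the middle of a zero-filled canvas
    h, w = len(image), len(image[0])
    top = (target_h - h) // 2
    left = (target_w - w) // 2
    canvas = [[0] * target_w for _ in range(target_h)]
    for row in range(h):
        for col in range(w):
            canvas[top + row][left + col] = image[row][col]
    return canvas

padded = pad_to([[1, 2], [3, 4]], 4, 4)
```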
Step11: The real_image has been reduced in height to be 2 pixels tall and the width has been increased by padding the image with zeros. This function works based on the center of the image input.
Flipping
Flipping an image is exactly what it sounds like. Each pixel's location is reversed horizontally or vertically. Technically speaking, flopping is the term used when flipping an image vertically. Terms aside, flipping images is useful with TensorFlow to give different perspectives of the same image for training. For example, a picture of an Australian Shepherd with crooked left ear could be flipped in order to allow matching of crooked right ears.
TensorFlow has functions to flip images vertically, horizontally and choose randomly. The ability to randomly flip an image is a useful method to keep from overfitting a model to flipped versions of images.
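Since a flip is just an index reversal, the effect is easy to sketch without TensorFlow on a small single-channel image:

```python
def flip_left_right(image):
    # Reverse pixel order within each row (horizontal flip)
    return [row[::-1] for row in image]

def flip_up_down(image):
    # Reverse the order of the rows (vertical flip, i.e. a "flop")
    return image[::-1]

image = [[1, 2], [3, 4]]
flipped_h = flip_left_right(image)
flipped_v = flip_up_down(image)
```

Flipping twice recovers the original image, which is why random flipping is a safe augmentation.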
Step12: This example code flips a subset of the image horizontally and then vertically. The subset is used with tf.slice because the original image flipped returns the same images (for this example only). The subset of pixels illustrates the change which occurs when an image is flipped. tf.image.flip_left_right and tf.image.flip_up_down both operate on tensors which are not limited to images. These will flip an image a single time, randomly flipping an image is done using a separate set of functions.
Step13: This example does the same logic as the example before except that the output is random. Every time this runs, a different output is expected. There is a parameter named seed which may be used to control how random the flipping occurs.
Saturation and Balance
Images which are found on the internet are often edited in advance. For instance, many of the images found in the Stanford Dogs dataset have too much saturation (lots of color). When an edited image is used for training, it may mislead a CNN model into finding patterns which are related to the edited image and not the content in the image.
TensorFlow has useful functions which help in training on images by changing the saturation, hue, contrast and brightness. The functions allow for simple manipulation of these image attributes as well as randomly altering these attributes. The random altering is useful in training for the same reason randomly flipping an image is useful. The random attribute changes help a CNN be able to accurately match a feature in images which have been edited or were taken under different lighting.
Step14: This example brightens a single pixel, which is primarily red, with a delta of 0.2. Unfortunately, in the current version of TensorFlow 0.8, this method doesn't work well with a tf.uint8 input. It's best to avoid using this when possible and preprocess brightness changes.
Step15: The example code changes the contrast by -0.5, which makes the new version of the image fairly unrecognizable. Adjusting contrast is best done in small increments to keep from blowing out an image. Blowing out an image means the same thing as saturating a neuron: the value reached its maximum and can't be recovered. With contrast changes, an image can become completely white and completely black from the same adjustment.
The tf.slice operation is for brevity, highlighting one of the pixels which has changed. It is not required when running this operation.
Step16: The example code adjusts the hue found in the image to make it more colorful. The adjustment accepts a delta parameter which controls the amount of hue to adjust in the image.
Step17: The code is similar to adjusting the contrast. It is common to oversaturate an image in order to identify edges because the increased saturation highlights changes in colors.
Colors
CNNs are commonly trained using images with a single color. When an image has a single color it is said to use a grayscale colorspace, meaning it uses a single channel of colors. For most computer vision related tasks, using grayscale is reasonable because the shape of an image can be seen without all the colors. The reduction in colors also makes an image quicker to train on. Instead of a 3 component rank 1 tensor to describe each color found with RGB, a grayscale image requires a single component rank 1 tensor to describe the amount of gray found in the image.
Although grayscale has benefits, it's important to consider applications which require a distinction based on color. Color in images is challenging to work with in most computer vision because it isn't easy to mathematically define the similarity of two RGB colors. In order to use colors in CNN training, it's useful to convert the colorspace the image is natively in.
Grayscale
Grayscale has a single component to it and has the same range of color as RGB <span class="math-tex" data-type="tex">\([0, 255]\)</span>.
Step18: This example converted the RGB image into grayscale. The tf.slice operation took the top row of pixels out to investigate how their color has changed. The grayscale conversion is done by taking a weighted average of the color values for each pixel and using that as the amount of gray.
HSV
Hue, saturation and value are what make up HSV colorspace. This space is represented with a 3 component rank 1 tensor similar to RGB. HSV is not similar to RGB in what it measures; it measures attributes of an image which are closer to human perception of color than RGB. It is sometimes called HSB, where the B stands for brightness.
Step19: RGB
RGB is the colorspace which has been used in all the example code so far. It's broken up into a 3 component rank 1 tensor which includes the amount of red <span class="math-tex" data-type="tex">\([0, 255]\)</span>, green <span class="math-tex" data-type="tex">\([0, 255]\)</span> and blue <span class="math-tex" data-type="tex">\([0, 255]\)</span>. Most images are already in RGB but TensorFlow has builtin functions in case the images are in another colorspace. | Python Code:
red = tf.constant([255, 0, 0])
Explanation: Images and TensorFlow
TensorFlow is designed to support working with images as input to neural networks. TensorFlow supports loading common file formats (JPG, PNG), working in different color spaces (RGB, RGBA) and common image manipulation tasks. TensorFlow makes it easier to work with images but it's still a challenge. The largest challenge working with images are the size of the tensor which is eventually loaded. Every image requires a tensor the same size as the image's <span class="math-tex" data-type="tex">\(height * width * channels\)</span>. As a reminder, channels are represented as a rank 1 tensor including a scalar amount of color in each channel.
A red RGB pixel in TensorFlow would be represented with the following tensor.
End of explanation
# The match_filenames_once will accept a regex but there is no need for this example.
image_filename = "./images/chapter-05-object-recognition-and-classification/working-with-images/test-input-image.jpg"
filename_queue = tf.train.string_input_producer(
tf.train.match_filenames_once(image_filename))
image_reader = tf.WholeFileReader()
_, image_file = image_reader.read(filename_queue)
image = tf.image.decode_jpeg(image_file)
Explanation: Each scalar can be changed to make the pixel another color or a mix of colors. The rank 1 tensor of a pixel is in the format of [red, green, blue] for an RGB color space. All the pixels in an image are stored in files on a disk which need to be read into memory so TensorFlow may operate on them.
Loading images
TensorFlow is designed to make it easy to load files from disk quickly. Loading images is the same as loading any other large binary file until the contents are decoded. Loading this example 3x3 pixel RGB JPG image is done using a similar process to loading any other type of file.
End of explanation
# setup-only-ignore
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
sess.run(image)
# setup-only-ignore
filename_queue.close(cancel_pending_enqueues=True)
coord.request_stop()
coord.join(threads)
Explanation: The image is assumed to be located in a directory relative to where this code is run. An input producer (tf.train.string_input_producer) finds the files and adds them to a queue for loading. Loading an image requires loading the entire file into memory (tf.WholeFileReader), and once a file has been read (image_reader.read) the resulting image is decoded (tf.image.decode_jpeg).
Now the image can be inspected, since there is only one file by that name the queue will always return the same image.
End of explanation
# Reuse the image from earlier and give it a fake label
image_label = b'\x01' # Assume the label data is in a one-hot representation (00000001)
# Convert the tensor into bytes, notice that this will load the entire image file
image_loaded = sess.run(image)
image_bytes = image_loaded.tobytes()
image_height, image_width, image_channels = image_loaded.shape
# Export TFRecord
writer = tf.python_io.TFRecordWriter("./output/training-image.tfrecord")
# Don't store the width, height or image channels in this Example file to save space but not required.
example = tf.train.Example(features=tf.train.Features(feature={
'label': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_label])),
'image': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes]))
}))
# This will save the example to a text file tfrecord
writer.write(example.SerializeToString())
writer.close()
Explanation: Inspect the output from loading an image and notice that it's a fairly simple rank 3 tensor. The RGB values are found in 9 rank 1 tensors. The higher rank of the image should be familiar from earlier sections. The format of the image loaded in memory is now [batch_size, image_height, image_width, channels].
The batch_size in this example is 1 because there are no batching operations happening. Batching of input is covered in the TensorFlow documentation with a great amount of detail. When dealing with images, note the amount of memory required to load the raw images. If the images are too large or too many are loaded in a batch, the system may stop responding.
Image Formats
It's important to consider aspects of images and how they affect a model. Consider what would happen if a network is trained with input from a single frame of a RED Weapon Camera, which at the time of writing this, has an effective pixel count of 6144x3160. That'd be 19,415,040 rank one tensors with 3 dimensions of color information.
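To make the memory cost concrete, here is a quick back-of-envelope calculation using the frame dimensions above, assuming 8-bit RGB channels. The byte counts are an illustration, not a TensorFlow measurement:

```python
# Rough memory cost of one raw 6144x3160 RGB frame.
height, width, channels = 3160, 6144, 3
num_pixels = height * width               # 19,415,040 rank 1 color tensors
bytes_as_uint8 = num_pixels * channels    # 1 byte per channel
bytes_as_float32 = bytes_as_uint8 * 4     # 4 bytes per channel after casting

print(num_pixels)                          # 19415040
print(round(bytes_as_uint8 / 2**20, 1))    # about 55.5 MiB
print(round(bytes_as_float32 / 2**20, 1))  # about 222.2 MiB
```

A single batch of these frames in float32 would dwarf most GPU memory, which is the point of the paragraph above.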
Practically speaking, an input of that size will use a huge amount of system memory. Training a CNN takes a large amount of time and loading very large files slows it down further. Even if the increase in time is acceptable, an image of that size would be hard to fit in memory on the majority of systems' GPUs.
A large input image is counterproductive to training most CNNs as well. The CNN is attempting to find inherent attributes in an image, which are unique but generalized so that they may be applied to other images with similar results. Using a large input floods a network with irrelevant information which will keep the model from generalizing.
In the Stanford Dogs Dataset there are two extremely different images of the same dog breed which should both match as a Pug. Although cute, these images are filled with useless information which mislead a network during training. For example, the hat worn by the Pug in n02110958_4030.jpg isn't a feature a CNN needs to learn in order to match a Pug. Most Pugs prefer pirate hats so the jester hat is training the network to match a hat which most Pugs don't wear.
Highlighting important information in images is done by storing them in an appropriate file format and manipulating them. Different formats can be used to solve different problems encountered while working with images.
JPEG and PNG
TensorFlow has two image formats used to decode image data, one is tf.image.decode_jpeg and the other is tf.image.decode_png. These are common file formats in computer vision applications because they're trivial to convert other formats to.
Something important to keep in mind, JPEG images don't store any alpha channel information and PNG images do. This could be important if what you're training on requires alpha information (transparency). An example usage scenario is one where you've manually cut out some pieces of an image, for example, irrelevant jester hats on dogs. Setting those pieces to black would make them seem of similar importance to other black colored items in the image. Setting the removed hat to have an alpha of 0 would help in distinguishing its removal.
When working with JPEG images, don't manipulate them too much because it'll leave artifacts. Instead, plan to take raw images and export them to JPEG while doing any manipulation needed. Try to manipulate images before loading them whenever possible to save time in training.
PNG images work well if manipulation is required. PNG format is lossless so it'll keep all the information from the original file (unless they've been resized or downsampled). The downside to PNGs is that the files are larger than their JPEG counterpart.
TFRecord
TensorFlow has a built-in file format designed to keep binary data and label (category for training) data in the same file. The format is called TFRecord and the format requires a preprocessing step to convert images to a TFRecord format before training. The largest benefit is keeping each input image in the same file as the label associated with it.
Technically, TFRecord files are protobuf formatted files. They are great for use as a preprocessed format because they aren't compressed and can be loaded into memory quickly. In this example, an image is written to a new TFRecord formatted file and its label is stored as well.
End of explanation
# Load TFRecord
tf_record_filename_queue = tf.train.string_input_producer(
tf.train.match_filenames_once("./output/training-image.tfrecord"))
# Notice the different record reader, this one is designed to work with TFRecord files which may
# have more than one example in them.
tf_record_reader = tf.TFRecordReader()
_, tf_record_serialized = tf_record_reader.read(tf_record_filename_queue)
# The label and image are stored as bytes but could be stored as int64 or float64 values in a
# serialized tf.Example protobuf.
tf_record_features = tf.parse_single_example(
tf_record_serialized,
features={
'label': tf.FixedLenFeature([], tf.string),
'image': tf.FixedLenFeature([], tf.string),
})
# Using tf.uint8 because all of the channel information is between 0-255
tf_record_image = tf.decode_raw(
tf_record_features['image'], tf.uint8)
# Reshape the image to look like the image saved, not required
tf_record_image = tf.reshape(
tf_record_image,
[image_height, image_width, image_channels])
# Use real values for the height, width and channels of the image because it's required
# to reshape the input.
tf_record_label = tf.cast(tf_record_features['label'], tf.string)
Explanation: The label is in a format known as one-hot encoding which is a common way to work with label data for categorization of multi-class data. The Stanford Dogs Dataset is being treated as multi-class data because the dogs are being categorized as a single breed and not a mix of breeds. In the real world, a multilabel solution would work well to predict dog breeds because it'd be capable of matching a dog with multiple breeds.
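As a sketch of what one-hot encoding looks like, here is a hypothetical 8-class example mirroring the 0b00000001 byte used above. This is an illustration only, not part of the TFRecord code:

```python
import numpy as np

num_classes = 8
label_index = 0                    # the single class this image belongs to
one_hot = np.zeros(num_classes, dtype=np.uint8)
one_hot[label_index] = 1           # exactly one position is hot
print(one_hot)                     # [1 0 0 0 0 0 0 0]
```

A multilabel variant would simply allow more than one position to be set to 1.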
In the example code, the image is loaded into memory and converted into an array of bytes. The bytes are then added to the tf.train.Example file, which is serialized (SerializeToString) before storing to disk. Serialization is a way of converting the in-memory object into a format safe to be transferred to a file. The serialized example is now saved in a format which can be loaded and deserialized back to the example format saved here.
Now that the image is saved as a TFRecord it can be loaded again but this time from the TFRecord file. This would be the loading required in a training step to load the image and label for training. This will save time from loading the input image and its corresponding label separately.
End of explanation
# setup-only-ignore
sess.close()
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
sess.run(tf.equal(image, tf_record_image))
Explanation: At first, the file is loaded in the same way as any other file. The main difference is that the file is then read using a TFRecordReader. Instead of decoding the image, the TFRecord is parsed tf.parse_single_example and then the image is read as raw bytes (tf.decode_raw).
After the file is loaded, it is reshaped (tf.reshape) in order to keep it in the layout tf.nn.conv2d expects: [image_height, image_width, image_channels]. It'd be safe to expand the dimensions (tf.expand_dims) in order to add the batch_size dimension to the input_batch.
In this case a single image is in the TFRecord but these record files support multiple examples being written to them. It'd be safe to have a single TFRecord file which stores an entire training set but splitting up the files doesn't hurt.
The following code is useful to check that the image saved to disk is the same as the image which was loaded from TensorFlow.
End of explanation
# Check that the label is still 0b00000001.
sess.run(tf_record_label)
# setup-only-ignore
tf_record_filename_queue.close(cancel_pending_enqueues=True)
coord.request_stop()
coord.join(threads)
Explanation: All of the attributes of the original image and the image loaded from the TFRecord file are the same. To be sure, load the label from the TFRecord file and check that it is the same as the one saved earlier.
End of explanation
sess.run(tf.image.central_crop(image, 0.1))
Explanation: Creating a file which stores both the raw image data and the expected output label will avoid extra complexity during training. It's not required to use TFRecord files but it's highly recommended when working with images. If it doesn't work well for a workflow, it's still recommended to preprocess images and save them before training. Manipulating an image each time it's loaded is not recommended.
Image Manipulation
CNNs work well when they're given a large amount of diverse quality training data. Images capture complex scenes in a way which visually communicates an intended subject. In the Stanford Dogs Dataset, it's important that the images visually highlight the importance of dogs in the picture. A picture with a dog clearly visible in the center is considered more valuable than one with a dog in the background.
Not all datasets have the most valuable images. The following are two images from the Stanford Dogs Dataset which are supposed to highlight dog breeds. The image on the left n02113978_3480.jpg highlights important attributes of a typical Mexican Hairless Dog, while the image on the right n02113978_1030.jpg highlights the look of inebriated party goers scaring a Mexican Hairless Dog. The image on the right n02113978_1030.jpg is filled with irrelevant information which may train a CNN to categorize party goer faces instead of Mexican Hairless Dog breeds. Images like this may still include an image of a dog and could be manipulated to highlight the dog instead of people.
Image manipulation is best done as a preprocessing step in most scenarios. An image can be cropped, resized and the color levels adjusted. On the other hand, there is an important use case for manipulating an image while training. After an image is loaded, it can be flipped or distorted to diversify the input training information used with the network. This step adds further processing time but helps with overfitting.
TensorFlow is not designed as an image manipulation framework. There are libraries available in Python which support more image manipulation than TensorFlow (PIL and OpenCV). For TensorFlow, we'll summarize a few useful image manipulation features available which are useful in training CNNs.
Cropping
Cropping an image will remove certain regions of the image without keeping any information. Cropping is similar to tf.slice where a section of a tensor is cut out from the full tensor. Cropping an input image for a CNN can be useful if there is extra input along a dimension which isn't required. For example, cropping dog pictures where the dog is in the center of the images to reduce the size of the input.
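Conceptually, cropping is plain array slicing. The following NumPy sketch of a central crop illustrates the idea; it is not TensorFlow's implementation:

```python
import numpy as np

image = np.arange(6 * 6 * 3).reshape(6, 6, 3)    # fake 6x6 RGB image

def central_crop(img, fraction):
    # Keep a centered window covering `fraction` of each spatial dimension.
    h, w = img.shape[0], img.shape[1]
    ch, cw = max(1, int(h * fraction)), max(1, int(w * fraction))
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]

print(central_crop(image, 0.5).shape)            # (3, 3, 3)
```

All color channels survive the crop untouched; only the spatial extent shrinks.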
End of explanation
# This crop method only works on real value input.
real_image = sess.run(image)
bounding_crop = tf.image.crop_to_bounding_box(
real_image, offset_height=0, offset_width=0, target_height=2, target_width=1)
sess.run(bounding_crop)
Explanation: The example code uses tf.image.central_crop to crop out 10% of the image and return it. This method always returns based on the center of the image being used.
Cropping is usually done in preprocessing, but it can also be applied during training when the background carries useful information. In that case, the crop can be done while randomizing the center offset of where the crop begins.
End of explanation
# This padding method only works on real value input.
real_image = sess.run(image)
pad = tf.image.pad_to_bounding_box(
real_image, offset_height=0, offset_width=0, target_height=4, target_width=4)
sess.run(pad)
Explanation: The example code uses tf.image.crop_to_bounding_box in order to crop the image starting at the upper left pixel located at (0, 0). Currently, the function only works with a tensor which has a defined shape so an input image needs to be executed on the graph first.
Padding
Pad an image with zeros in order to make it the same size as an expected image. This can be accomplished using tf.pad, but TensorFlow has another function useful for resizing images which are too large or too small. The method will pad an image which is too small by including zeros along the edges of the image. Often, this method is used to resize small images because any other method of resizing will distort the image.
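The underlying idea can be sketched with NumPy's pad function. This is an illustration of zero padding, not the TensorFlow method itself:

```python
import numpy as np

image = np.ones((2, 3, 3), dtype=np.uint8)       # 2x3 RGB image of ones

target_height, target_width = 4, 4
pad_h = target_height - image.shape[0]
pad_w = target_width - image.shape[1]
# Zeros go on the bottom/right edges here; the TensorFlow method centers
# the original image instead.
padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode='constant')
print(padded.shape)                              # (4, 4, 3)
```

The original pixel values are preserved exactly; the padding only adds zero-valued borders.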
End of explanation
# This padding method only works on real value input.
real_image = sess.run(image)
crop_or_pad = tf.image.resize_image_with_crop_or_pad(
real_image, target_height=2, target_width=5)
sess.run(crop_or_pad)
Explanation: This example code increases the image's height by one pixel and its width by one pixel as well. The new pixels are all set to 0. Padding in this manner is useful for scaling up an image which is too small. This can happen if there are images in the training set with a mix of aspect ratios. TensorFlow has a useful shortcut for resizing images which don't match the same aspect ratio using a combination of pad and crop.
End of explanation
top_left_pixels = tf.slice(image, [0, 0, 0], [2, 2, 3])
flip_horizon = tf.image.flip_left_right(top_left_pixels)
flip_vertical = tf.image.flip_up_down(flip_horizon)
sess.run([top_left_pixels, flip_vertical])
Explanation: The real_image has been reduced in height to be 2 pixels tall and the width has been increased by padding the image with zeros. This function works based on the center of the image input.
Flipping
Flipping an image is exactly what it sounds like. Each pixel's location is reversed horizontally or vertically. Technically speaking, flopping is the term used when flipping an image vertically. Terms aside, flipping images is useful with TensorFlow to give different perspectives of the same image for training. For example, a picture of an Australian Shepherd with a crooked left ear could be flipped in order to allow matching of crooked right ears.
TensorFlow has functions to flip images vertically, horizontally and choose randomly. The ability to randomly flip an image is a useful method to keep from overfitting a model to flipped versions of images.
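Conceptually, a flip just reverses the pixel indices along one axis, which a short NumPy sketch makes clear. This is an illustration, not the TensorFlow code:

```python
import numpy as np

image = np.array([[[1], [2]],
                  [[3], [4]]])        # 2x2 image, single channel

flip_left_right = image[:, ::-1]      # reverse the width axis
flip_up_down = image[::-1, :]         # reverse the height axis

print(int(flip_left_right[0, 0, 0]))  # 2
print(int(flip_up_down[0, 0, 0]))     # 3
```

The channel values never change; only their spatial positions do, which is why a flipped image carries the same features in mirrored locations.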
End of explanation
top_left_pixels = tf.slice(image, [0, 0, 0], [2, 2, 3])
random_flip_horizon = tf.image.random_flip_left_right(top_left_pixels)
random_flip_vertical = tf.image.random_flip_up_down(random_flip_horizon)
sess.run(random_flip_vertical)
Explanation: This example code flips a subset of the image horizontally and then vertically. The subset is used with tf.slice because the original image flipped returns the same images (for this example only). The subset of pixels illustrates the change which occurs when an image is flipped. tf.image.flip_left_right and tf.image.flip_up_down both operate on tensors which are not limited to images. These will flip an image a single time, randomly flipping an image is done using a separate set of functions.
End of explanation
example_red_pixel = tf.constant([254., 2., 15.])
adjust_brightness = tf.image.adjust_brightness(example_red_pixel, 0.2)
sess.run(adjust_brightness)
Explanation: This example does the same logic as the example before except that the output is random. Every time this runs, a different output is expected. There is a parameter named seed which may be used to control how random the flipping occurs.
Saturation and Balance
Images which are found on the internet are often edited in advance. For instance, many of the images found in the Stanford Dogs dataset have too much saturation (lots of color). When an edited image is used for training, it may mislead a CNN model into finding patterns which are related to the edited image and not the content in the image.
TensorFlow has useful functions which help in training on images by changing the saturation, hue, contrast and brightness. The functions allow for simple manipulation of these image attributes as well as randomly altering these attributes. The random altering is useful in training for the same reason randomly flipping an image is useful. The random attribute changes help a CNN be able to accurately match a feature in images which have been edited or were taken under different lighting.
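What a brightness delta does can be sketched on a single float pixel. This NumPy version assumes channels normalized to [0, 1] and is an illustration of the idea, not TensorFlow's implementation:

```python
import numpy as np

pixel = np.array([0.95, 0.20, 0.05])             # float RGB in [0, 1]
delta = 0.2
# Add the delta to every channel, clipping so values can't leave the range.
brightened = np.clip(pixel + delta, 0.0, 1.0)
print(brightened)                                # approximately [1.0, 0.4, 0.25]
```

The clipping is why a channel that is already near its maximum stops responding to further brightening.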
End of explanation
adjust_contrast = tf.image.adjust_contrast(image, -.5)
sess.run(tf.slice(adjust_contrast, [1, 0, 0], [1, 3, 3]))
Explanation: This example brightens a single pixel, which is primarily red, with a delta of 0.2. Unfortunately, in the current version of TensorFlow 0.8, this method doesn't work well with a tf.uint8 input. It's best to avoid using this when possible and preprocess brightness changes.
End of explanation
adjust_hue = tf.image.adjust_hue(image, 0.7)
sess.run(tf.slice(adjust_hue, [1, 0, 0], [1, 3, 3]))
Explanation: The example code changes the contrast by -0.5, which makes the new version of the image fairly unrecognizable. Adjusting contrast is best done in small increments to keep from blowing out an image. Blowing out an image means the same thing as saturating a neuron: the value reached its maximum and can't be recovered. With contrast changes, an image can become completely white and completely black from the same adjustment.
The tf.slice operation is for brevity, highlighting one of the pixels which has changed. It is not required when running this operation.
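The reason contrast changes blow out quickly is visible in a small sketch of the adjustment. The formula below (scale each value away from the mean, then clip) is an assumption about the usual approach, not TensorFlow's exact code:

```python
import numpy as np

pixels = np.array([0.2, 0.5, 0.8])    # grayscale intensities in [0, 1]

def adjust_contrast(x, factor):
    # Scale each value away from (or toward) the mean, then clip to range.
    mean = x.mean()
    return np.clip((x - mean) * factor + mean, 0.0, 1.0)

print(adjust_contrast(pixels, 2.0))   # clips to [0.0, 0.5, 1.0]
print(adjust_contrast(pixels, 0.5))   # [0.35, 0.5, 0.65]
```

With a factor of 2.0 both end values hit the clip limits at once, which matches the point above: one adjustment can make parts of an image completely white and completely black.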
End of explanation
adjust_saturation = tf.image.adjust_saturation(image, 0.4)
sess.run(tf.slice(adjust_saturation, [1, 0, 0], [1, 3, 3]))
Explanation: The example code adjusts the hue found in the image to make it more colorful. The adjustment accepts a delta parameter which controls the amount of hue to adjust in the image.
End of explanation
gray = tf.image.rgb_to_grayscale(image)
sess.run(tf.slice(gray, [0, 0, 0], [1, 3, 1]))
Explanation: The code is similar to adjusting the contrast. It is common to oversaturate an image in order to identify edges because the increased saturation highlights changes in colors.
Colors
CNNs are commonly trained using images with a single color. When an image has a single color it is said to use a grayscale colorspace, meaning it uses a single channel of colors. For most computer vision related tasks, using grayscale is reasonable because the shape of an image can be seen without all the colors. The reduction in colors also makes an image quicker to train on. Instead of a 3 component rank 1 tensor to describe each color found with RGB, a grayscale image requires a single component rank 1 tensor to describe the amount of gray found in the image.
Although grayscale has benefits, it's important to consider applications which require a distinction based on color. Color in images is challenging to work with in most computer vision because it isn't easy to mathematically define the similarity of two RGB colors. In order to use colors in CNN training, it's useful to convert the colorspace the image is natively in.
Grayscale
Grayscale has a single component to it and has the same range of color as RGB <span class="math-tex" data-type="tex">\([0, 255]\)</span>.
End of explanation
hsv = tf.image.rgb_to_hsv(tf.image.convert_image_dtype(image, tf.float32))
sess.run(tf.slice(hsv, [0, 0, 0], [3, 3, 3]))
Explanation: This example converted the RGB image into grayscale. The tf.slice operation took the top row of pixels out to investigate how their color has changed. The grayscale conversion is done by taking a weighted average of the color values for each pixel and using that as the amount of gray.
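A NumPy sketch of that conversion follows. The specific weights are the common Rec. 601 luma values and are an assumption for illustration, not necessarily the exact constants TensorFlow uses:

```python
import numpy as np

rgb = np.array([[[255., 0., 0.],
                 [0., 255., 0.],
                 [0., 0., 255.]]])          # one row: pure red, green, blue

weights = np.array([0.299, 0.587, 0.114])   # Rec. 601 luma weights (assumed)
gray = rgb @ weights                        # weighted sum over the channel axis
print(gray)                                 # roughly [[76.2, 149.7, 29.1]]
```

Green contributes the most gray because human vision is most sensitive to it, which is the point of weighting the channels instead of averaging them equally.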
HSV
Hue, saturation and value are what make up HSV colorspace. This space is represented with a 3 component rank 1 tensor similar to RGB. HSV is not similar to RGB in what it measures; it measures attributes of an image which are closer to human perception of color than RGB. It is sometimes called HSB, where the B stands for brightness.
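Python's standard library can demonstrate the same mapping on a single pixel. This uses colorsys (not TensorFlow), which expects channels normalized to [0, 1]:

```python
import colorsys

# Pure red, with channels normalized to [0, 1] as colorsys expects.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)   # 0.0 1.0 1.0 -> hue 0 (red), fully saturated, full value
```

The hue component is an angle around the color wheel (here expressed as a fraction of a turn), which is why similar colors sit near each other in HSV even when their RGB values differ widely.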
End of explanation
rgb_hsv = tf.image.hsv_to_rgb(hsv)
rgb_grayscale = tf.image.grayscale_to_rgb(gray)
Explanation: RGB
RGB is the colorspace which has been used in all the example code so far. It's broken up into a 3 component rank 1 tensor which includes the amount of red <span class="math-tex" data-type="tex">\([0, 255]\)</span>, green <span class="math-tex" data-type="tex">\([0, 255]\)</span> and blue <span class="math-tex" data-type="tex">\([0, 255]\)</span>. Most images are already in RGB but TensorFlow has builtin functions in case the images are in another colorspace.
End of explanation |
Description:
Hand tuning hyperparameters
Learning Objectives
Step1: Next, we'll load our data set.
Step2: Examine the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
Step3: In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). Can we use total_rooms as our input feature? What's going on with the values for that feature?
This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicing the price of a single house, we should try to make all our features correspond to a single house as well
Step4: Build the first model
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use num_rooms as our input feature.
To train our model, we'll use the LinearRegressor estimator. The Estimator takes care of a lot of the plumbing, and exposes a convenient way to interact with data, training, and evaluation.
Step5: 1. Scale the output
Let's scale the target values so that the default parameters are more appropriate.
Step6: 2. Change learning rate and batch size
Can you come up with better parameters? | Python Code:
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
print(tf.__version__)
tf.logging.set_verbosity(tf.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
Explanation: Hand tuning hyperparameters
Learning Objectives:
* Use the LinearRegressor class in TensorFlow to predict median housing price, at the granularity of city blocks, based on one input feature
* Evaluate the accuracy of a model's predictions using Root Mean Squared Error (RMSE)
* Improve the accuracy of a model by hand-tuning its hyperparameters
The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Using only one input feature -- the number of rooms -- predict house value.
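RMSE, the evaluation metric listed in the objectives, is a one-line formula: the square root of the mean squared difference between labels and predictions. A NumPy sketch with made-up numbers (not model output):

```python
import numpy as np

labels = np.array([200.0, 310.0, 150.0])        # hypothetical true values
predictions = np.array([180.0, 300.0, 170.0])   # hypothetical predictions
rmse = np.sqrt(np.mean((labels - predictions) ** 2))
print(round(float(rmse), 2))                    # 17.32
```

Because the errors are squared before averaging, a few large misses dominate the score, which makes RMSE a useful metric for house prices where big errors are costly.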
Set Up
In this first cell, we'll load the necessary libraries.
End of explanation
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
Explanation: Next, we'll load our data set.
End of explanation
df.head()
df.describe()
Explanation: Examine the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
End of explanation
df['num_rooms'] = df['total_rooms'] / df['households']
df.describe()
# Split into train and eval
np.random.seed(seed=1) #makes split reproducible
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
Explanation: In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). Can we use total_rooms as our input feature? What's going on with the values for that feature?
This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicing the price of a single house, we should try to make all our features correspond to a single house as well
End of explanation
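Before training, it helps to see exactly what the RMSE metric added to the estimator below computes. This is a minimal pure-NumPy sketch — the rmse helper here is a hypothetical re-implementation for illustration, not the tf.metrics version the estimator uses:

```python
import numpy as np

def rmse(labels, preds):
    # root mean squared error: square root of the mean squared difference
    labels = np.asarray(labels, dtype=float)
    preds = np.asarray(preds, dtype=float)
    return float(np.sqrt(np.mean((labels - preds) ** 2)))

print(rmse([100.0, 200.0], [110.0, 190.0]))  # → 10.0
```

Because errors are squared before averaging, RMSE penalizes a few large misses more than many small ones, which suits a price-prediction task.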
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = [tf.feature_column.numeric_column('num_rooms')])
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels, pred_values)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]],
y = traindf["median_house_value"], # note the scaling
num_epochs = None,
shuffle = True),
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]],
y = evaldf["median_house_value"], # note the scaling
num_epochs = 1,
shuffle = False),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
Explanation: Build the first model
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use num_rooms as our input feature.
To train our model, we'll use the LinearRegressor estimator. The Estimator takes care of a lot of the plumbing, and exposes a convenient way to interact with data, training, and evaluation.
End of explanation
SCALE = 100000
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = [tf.feature_column.numeric_column('num_rooms')])
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]],
y = traindf["median_house_value"] / SCALE, # note the scaling
num_epochs = None,
shuffle = True),
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]],
y = evaldf["median_house_value"] / SCALE, # note the scaling
num_epochs = 1,
shuffle = False),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
Explanation: 1. Scale the output
Let's scale the target values so that the default parameters are more appropriate.
End of explanation
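A quick back-of-the-envelope check of why scaling helps (the sample value below is illustrative): with targets in the hundreds of thousands, the initial squared error — and hence the gradients — is enormous, so the optimizer's default step sizes are badly mismatched. Dividing by SCALE keeps the loss on the order of 1.

```python
SCALE = 100000
label = 180500.0   # an illustrative median_house_value
pred = 0.0         # an untrained model starts near zero

print((label - pred) ** 2)          # raw squared error: ~3.3e10
print((label / SCALE - pred) ** 2)  # after scaling: ~3.3
```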
SCALE = 100000
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
myopt = tf.train.FtrlOptimizer(learning_rate = 0.2) # note the learning rate
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = [tf.feature_column.numeric_column('num_rooms')],
optimizer = myopt)
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]],
y = traindf["median_house_value"] / SCALE, # note the scaling
num_epochs = None,
batch_size = 512, # note the batch size
shuffle = True),
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]],
y = evaldf["median_house_value"] / SCALE, # note the scaling
num_epochs = 1,
shuffle = False),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
Explanation: 2. Change learning rate and batch size
Can you come up with better parameters?
End of explanation |
8,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convert the date_time column to a datetime type so we can separate the train and test sets, because the test set's timestamps must come later than the train set's.
Step1: Pick the rows of 10,000 random users as our train data set
Step2: Simple prediction | Python Code:
train["date_time"] = pd.to_datetime(train["date_time"])
train["year"] = train["date_time"].dt.year
train["month"] = train["date_time"].dt.month
Explanation: Convert the date_time column to a datetime type so we can separate the train and test sets, because the test set's timestamps must come later than the train set's.
End of explanation
import random
unique_users = train.user_id.unique()
sel_user_ids = [unique_users[i] for i in sorted(random.sample(range(len(unique_users)), 10000)) ]
sel_train = train[train.user_id.isin(sel_user_ids)]
t1 = sel_train[((sel_train.year == 2013) | ((sel_train.year == 2014) & (sel_train.month < 8)))]
t2 = sel_train[((sel_train.year == 2014) & (sel_train.month >= 8))]
# keep only actual bookings in the test set (drop click-only rows)
t2 = t2[t2.is_booking == True]
Explanation: Pick the rows of 10,000 random users as our train data set
End of explanation
t2[:10]
most_common_clusters = list(train.hotel_cluster.value_counts().head().index)
predictions = [most_common_clusters for i in range(t2.shape[0])]
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
dest_small = pca.fit_transform(destinations[["d{0}".format(i + 1) for i in range(149)]])
dest_small = pd.DataFrame(dest_small)
dest_small["srch_destination_id"] = destinations["srch_destination_id"]
Explanation: Simple prediction: use the 5 most common clusters as the prediction for every row in the test set
End of explanation |
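One way to score this baseline is MAP@5 (mean average precision at 5), the usual metric for this Expedia task. The apk/mapk helpers below are a minimal hypothetical re-implementation (the ml_metrics package provides an equivalent mapk):

```python
def apk(actual, predicted, k=5):
    # average precision at k for one row of predictions
    predicted = predicted[:k]
    score, hits = 0.0, 0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            hits += 1
            score += hits / (i + 1.0)
    return score / min(len(actual), k) if actual else 0.0

def mapk(actual, predicted, k=5):
    # mean of apk over all rows
    return sum(apk(a, p, k) for a, p in zip(actual, predicted)) / len(actual)

print(apk([3], [1, 2, 3]))  # one hit at rank 3 → 1/3
```

Applied to the baseline above, something like `mapk([[c] for c in t2["hotel_cluster"]], predictions, k=5)` would give the score of always predicting the five most common clusters.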
8,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 1
Imports
Step1: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral $I(a,b) = \int_a^b f(x)\, dx$ by dividing the interval $[a,b]$ into $N$ subdivisions of length $h = (b-a)/N$.
Step2: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
Explanation: Integration Exercise 1
Imports
End of explanation
def trapz(f, a, b, N):
h = (b-a)/N
k = np.arange(1,N)
I = h*(0.5*f(a) + 0.5*f(b) + f(a+k*h).sum())
return I
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
Explanation: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral:
$$ I(a,b) = \int_a^b f(x) dx $$
by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:
$$ h = (b-a)/N $$
Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.
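For example, take $f(x) = x^2$ on $[0,1]$ with $N=2$, so $h = 1/2$ and the single interior point is $x = 1/2$. The rule weights the endpoints by one half:
$$ I \approx h\left(\tfrac{1}{2}f(0) + f(\tfrac{1}{2}) + \tfrac{1}{2}f(1)\right) = \tfrac{1}{2}\left(0 + \tfrac{1}{4} + \tfrac{1}{2}\right) = 0.375, $$
already close to the exact value $1/3 \approx 0.333$.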
Write a function trapz(f, a, b, N) that performs the trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).
End of explanation
integrate.quad(f,0,1)
trapz(f, 0, 1, 1000)
integrate.quad(g,0,np.pi)
trapz(g, 0, np.pi, 1000)
assert True # leave this cell to grade the previous one
Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
End of explanation |
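As a sanity check on accuracy, the trapezoidal rule's error shrinks like $O(1/N^2)$, so doubling $N$ should cut the error by roughly 4x. A quick sketch (re-stating trapz so the snippet stands alone):

```python
import numpy as np

def trapz(f, a, b, N):
    h = (b - a) / N
    k = np.arange(1, N)
    return h * (0.5 * f(a) + 0.5 * f(b) + f(a + k * h).sum())

exact = 1.0 / 3.0  # integral of x^2 over [0, 1]
errs = [abs(trapz(lambda x: x**2, 0, 1, N) - exact) for N in (10, 20, 40)]
ratios = [errs[i] / errs[i + 1] for i in range(len(errs) - 1)]
print(ratios)  # each ratio is close to 4, confirming O(1/N^2) convergence
```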
8,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Hub Authors.
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/classify_text_with_bert"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
</table>
Step2: You will use the AdamW optimizer from tensorflow/models.
Step3: Sentiment analysis
This notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review.
You'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database.
Download the IMDB dataset
Let's download and extract the dataset, then explore the directory structure.
Step4: Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset.
The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below.
Step5: Let's take a look at a few reviews.
Step6: Loading models from TensorFlow Hub
Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune. There are multiple BERT models available.
BERT-Base, Uncased and seven more models with trained weights released by the original BERT authors.
Small BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.
ALBERT: four different sizes of "A Lite BERT" that reduces model size (but not computation time) by sharing parameters between layers.
Step7: The preprocessing model
Text inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. It is not necessary to run pure Python code outside your TensorFlow model to preprocess text.
The preprocessing model must be the one referenced by the documentation of the BERT model, which you can read at the URL printed above. For BERT models from the drop-down above, the preprocessing model is selected automatically.
Note: You will load the preprocessing model into a hub.KerasLayer to compose your fine-tuned model. This is the preferred API to load a TF2-style SavedModel from TF Hub into a Keras model.
Step8: Let's try the preprocessing model on some text and see the output
Step9: As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (input_words_id, input_mask and input_type_ids).
Some other important points: the input is truncated to 128 tokens, and input_type_ids has a single value (0) here because the input is a single sentence.
Step10: The BERT models return a map with 3 important keys: pooled_output, sequence_output, and encoder_outputs
Step11: Let's check that the model runs with the output of the preprocessing model.
Step12: The output is meaningless, of course, because the model has not been trained yet.
Let's take a look at the model's structure.
Step13: Model training
You now have all the pieces to train a model, including the preprocessing module, BERT encoder, data, and classifier.
Loss function
Since this is a binary classification problem and the model outputs a probability (a single-unit layer), you'll use losses.BinaryCrossentropy loss function.
Step14: Optimizer
For fine-tuning, let's use the same optimizer that BERT was originally trained with: Adam with weight decay (AdamW).
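AdamW fine-tuning typically ramps the learning rate up linearly over a warmup phase and then decays it linearly to zero. A plain-Python sketch of that schedule — the parameter values are illustrative, and the real schedule comes from the tensorflow/models optimization module:

```python
def lr_at(step, init_lr=3e-5, num_train_steps=1000, num_warmup_steps=100):
    # linear warmup from 0 to init_lr, then linear decay back to 0
    if step < num_warmup_steps:
        return init_lr * step / num_warmup_steps
    frac_left = (num_train_steps - step) / (num_train_steps - num_warmup_steps)
    return init_lr * max(frac_left, 0.0)

print(lr_at(0), lr_at(50), lr_at(100), lr_at(1000))
```

Warmup keeps the first updates small while the pretrained weights are still far from the task, which stabilizes fine-tuning.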
Step15: Loading the BERT model and training
Using the classifier_model you created earlier, you can compile the model with the loss, metric and optimizer.
Step16: Note
Step17: Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number which represents the error; lower values are better) and accuracy.
Step18: Plot the accuracy and loss over time
Based on the History object returned by model.fit(), you can plot the training and validation loss for comparison, as well as the training and validation accuracy.
Step19: In this plot, the red lines represent the training loss and accuracy, and the blue lines are the validation loss and accuracy.
Export for inference
Now you just save your fine-tuned model for later use.
Step20: Let's reload the model, so you can try it side by side with the model that is still in memory.
Step21: Here you can test your model on any sentence you want, just add to the examples variable below.
Step22: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. In Python, you can test them as follows | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Hub Authors.
End of explanation
# A dependency of the preprocessing for BERT inputs
!pip install -q -U tensorflow-text
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/classify_text_with_bert"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/classify_text_with_bert.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/collections/bert/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Classify text with BERT
This tutorial contains complete code to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews.
In addition to training a model, you will learn how to preprocess text into an appropriate format.
In this notebook, you will:
Load the IMDB dataset
Load a BERT model from TensorFlow Hub
Build your own model by combining BERT with a classifier
Train your own model, fine-tuning BERT as part of that
Save your model and use it to classify sentences
If you're new to working with the IMDB dataset, please see Basic text classification for more details.
About BERT
BERT and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after, hence the name: Bidirectional Encoder Representations from Transformers.
BERT models are usually pre-trained on a large corpus of text, then fine-tuned for specific tasks.
Setup
End of explanation
!pip install -q tf-models-official
import os
import shutil
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
from official.nlp import optimization # to create AdamW optimizer
import matplotlib.pyplot as plt
tf.get_logger().setLevel('ERROR')
Explanation: You will use the AdamW optimizer from tensorflow/models.
End of explanation
url = 'https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
dataset = tf.keras.utils.get_file('aclImdb_v1.tar.gz', url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
train_dir = os.path.join(dataset_dir, 'train')
# remove unused folders to make it easier to load the data
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
Explanation: Sentiment analysis
This notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review.
You'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database.
Download the IMDB dataset
Let's download and extract the dataset, then explore the directory structure.
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
batch_size = 32
seed = 42
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
class_names = raw_train_ds.class_names
train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/test',
batch_size=batch_size)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
Explanation: Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset.
The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below.
Note: When using the validation_split and subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation and training splits have no overlap.
End of explanation
for text_batch, label_batch in train_ds.take(1):
for i in range(3):
print(f'Review: {text_batch.numpy()[i]}')
label = label_batch.numpy()[i]
print(f'Label : {label} ({class_names[label]})')
Explanation: Let's take a look at a few reviews.
End of explanation
#@title Choose a BERT model to fine-tune
bert_model_name = 'small_bert/bert_en_uncased_L-4_H-512_A-8' #@param ["bert_en_uncased_L-12_H-768_A-12", "bert_en_cased_L-12_H-768_A-12", "bert_multi_cased_L-12_H-768_A-12", "small_bert/bert_en_uncased_L-2_H-128_A-2", "small_bert/bert_en_uncased_L-2_H-256_A-4", "small_bert/bert_en_uncased_L-2_H-512_A-8", "small_bert/bert_en_uncased_L-2_H-768_A-12", "small_bert/bert_en_uncased_L-4_H-128_A-2", "small_bert/bert_en_uncased_L-4_H-256_A-4", "small_bert/bert_en_uncased_L-4_H-512_A-8", "small_bert/bert_en_uncased_L-4_H-768_A-12", "small_bert/bert_en_uncased_L-6_H-128_A-2", "small_bert/bert_en_uncased_L-6_H-256_A-4", "small_bert/bert_en_uncased_L-6_H-512_A-8", "small_bert/bert_en_uncased_L-6_H-768_A-12", "small_bert/bert_en_uncased_L-8_H-128_A-2", "small_bert/bert_en_uncased_L-8_H-256_A-4", "small_bert/bert_en_uncased_L-8_H-512_A-8", "small_bert/bert_en_uncased_L-8_H-768_A-12", "small_bert/bert_en_uncased_L-10_H-128_A-2", "small_bert/bert_en_uncased_L-10_H-256_A-4", "small_bert/bert_en_uncased_L-10_H-512_A-8", "small_bert/bert_en_uncased_L-10_H-768_A-12", "small_bert/bert_en_uncased_L-12_H-128_A-2", "small_bert/bert_en_uncased_L-12_H-256_A-4", "small_bert/bert_en_uncased_L-12_H-512_A-8", "small_bert/bert_en_uncased_L-12_H-768_A-12", "albert_en_base", "electra_small", "electra_base", "experts_pubmed", "experts_wiki_books", "talking-heads_base"]
map_name_to_handle = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_base/2',
'electra_small':
'https://tfhub.dev/google/electra_small/2',
'electra_base':
'https://tfhub.dev/google/electra_base/2',
'experts_pubmed':
'https://tfhub.dev/google/experts/bert/pubmed/2',
'experts_wiki_books':
'https://tfhub.dev/google/experts/bert/wiki_books/2',
'talking-heads_base':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1',
}
map_model_to_preprocess = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'electra_small':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'electra_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_pubmed':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_wiki_books':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
}
tfhub_handle_encoder = map_name_to_handle[bert_model_name]
tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name]
print(f'BERT model selected : {tfhub_handle_encoder}')
print(f'Preprocess model auto-selected: {tfhub_handle_preprocess}')
Explanation: Loading models from TensorFlow Hub
Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune. There are multiple BERT models available.
BERT-Base, Uncased and seven more models with trained weights released by the original BERT authors.
Small BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.
ALBERT: four different sizes of "A Lite BERT" that reduces model size (but not computation time) by sharing parameters between layers.
BERT Experts: eight models that all have the BERT-base architecture but offer a choice between different pre-training domains, to align more closely with the target task.
Electra has the same architecture as BERT (in three different sizes), but gets pre-trained as a discriminator in a set-up that resembles a Generative Adversarial Network (GAN).
BERT with Talking-Heads Attention and Gated GELU [base, large] has two improvements to the core of the Transformer architecture.
The model documentation on TensorFlow Hub has more details and references to the
research literature. Follow the links above, or click on the tfhub.dev URL
printed after the next cell execution.
The suggestion is to start with a Small BERT (with fewer parameters) since they are faster to fine-tune. If you like a small model but with higher accuracy, ALBERT might be your next option. If you want even better accuracy, choose
one of the classic BERT sizes or their recent refinements like Electra, Talking Heads, or a BERT Expert.
Aside from the models available below, there are multiple versions of the models that are larger and can yield even better accuracy, but they are too big to be fine-tuned on a single GPU. You will be able to do that on the Solve GLUE tasks using BERT on a TPU colab.
You'll see in the code below that switching the tfhub.dev URL is enough to try any of these models, because all the differences between them are encapsulated in the SavedModels from TF Hub.
End of explanation
bert_preprocess_model = hub.KerasLayer(tfhub_handle_preprocess)
Explanation: The preprocessing model
Text inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. It is not necessary to run pure Python code outside your TensorFlow model to preprocess text.
The preprocessing model must be the one referenced by the documentation of the BERT model, which you can read at the URL printed above. For BERT models from the drop-down above, the preprocessing model is selected automatically.
Note: You will load the preprocessing model into a hub.KerasLayer to compose your fine-tuned model. This is the preferred API to load a TF2-style SavedModel from TF Hub into a Keras model.
End of explanation
text_test = ['this is such an amazing movie!']
text_preprocessed = bert_preprocess_model(text_test)
print(f'Keys : {list(text_preprocessed.keys())}')
print(f'Shape : {text_preprocessed["input_word_ids"].shape}')
print(f'Word Ids : {text_preprocessed["input_word_ids"][0, :12]}')
print(f'Input Mask : {text_preprocessed["input_mask"][0, :12]}')
print(f'Type Ids : {text_preprocessed["input_type_ids"][0, :12]}')
Explanation: Let's try the preprocessing model on some text and see the output:
End of explanation
bert_model = hub.KerasLayer(tfhub_handle_encoder)
bert_results = bert_model(text_preprocessed)
print(f'Loaded BERT: {tfhub_handle_encoder}')
print(f'Pooled Outputs Shape:{bert_results["pooled_output"].shape}')
print(f'Pooled Outputs Values:{bert_results["pooled_output"][0, :12]}')
print(f'Sequence Outputs Shape:{bert_results["sequence_output"].shape}')
print(f'Sequence Outputs Values:{bert_results["sequence_output"][0, :12]}')
Explanation: As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (input_words_id, input_mask and input_type_ids).
Some other important points:
- The input is truncated to 128 tokens. The number of tokens can be customized, and you can see more details on the Solve GLUE tasks using BERT on a TPU colab.
- The input_type_ids only have one value (0) because this is a single sentence input. For a multiple sentence input, it would have one number for each input.
Since this text preprocessor is a TensorFlow model, it can be included in your model directly.
Using the BERT model
Before putting BERT into your own model, let's take a look at its outputs. You will load it from TF Hub and see the returned values.
End of explanation
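The truncate-pad-mask behavior described above can be mimicked with a toy sketch. This is not the real WordPiece tokenizer — the word ids are made up for illustration; only the special ids 101 ([CLS]), 102 ([SEP]) and 0 ([PAD]) follow the BERT uncased convention:

```python
def pack(token_ids, seq_length=128, cls_id=101, sep_id=102, pad_id=0):
    # wrap with [CLS]/[SEP], truncate to seq_length, pad, and build the mask
    ids = [cls_id] + list(token_ids)[: seq_length - 2] + [sep_id]
    mask = [1] * len(ids)  # 1 marks a real token (including [CLS]/[SEP])
    ids += [pad_id] * (seq_length - len(ids))
    mask += [0] * (seq_length - len(mask))
    return ids, mask

ids, mask = pack([2023, 2003, 2107, 2019, 6429, 3185, 999], seq_length=12)
print(ids)
print(mask)
```

Summing input_mask recovers the number of non-padding positions, which is why the printed mask above has ones exactly where the printed word ids are non-zero.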
def build_classifier_model():
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess, name='preprocessing')
encoder_inputs = preprocessing_layer(text_input)
encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT_encoder')
outputs = encoder(encoder_inputs)
net = outputs['pooled_output']
net = tf.keras.layers.Dropout(0.1)(net)
net = tf.keras.layers.Dense(1, activation=None, name='classifier')(net)
return tf.keras.Model(text_input, net)
Explanation: The BERT models return a map with 3 important keys: pooled_output, sequence_output, encoder_outputs:
pooled_output represents each input sequence as a whole. The shape is [batch_size, H]. You can think of this as an embedding for the entire movie review.
sequence_output represents each input token in the context. The shape is [batch_size, seq_length, H]. You can think of this as a contextual embedding for every token in the movie review.
encoder_outputs are the intermediate activations of the L Transformer blocks. outputs["encoder_outputs"][i] is a Tensor of shape [batch_size, seq_length, H] with the outputs of the i-th Transformer block, for 0 <= i < L. The last value of the list is equal to sequence_output.
For the fine-tuning you are going to use the pooled_output array.
Define your model
You will create a very simple fine-tuned model, with the preprocessing model, the selected BERT model, one Dense and a Dropout layer.
Note: for more information about the base model's input and output you can follow the model's URL for documentation. Here specifically, you don't need to worry about it because the preprocessing model will take care of that for you.
End of explanation
classifier_model = build_classifier_model()
bert_raw_result = classifier_model(tf.constant(text_test))
print(tf.sigmoid(bert_raw_result))
Explanation: Let's check that the model runs with the output of the preprocessing model.
End of explanation
tf.keras.utils.plot_model(classifier_model)
Explanation: The output is meaningless, of course, because the model has not been trained yet.
Let's take a look at the model's structure.
End of explanation
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
metrics = tf.metrics.BinaryAccuracy()
Explanation: Model training
You now have all the pieces to train a model, including the preprocessing module, BERT encoder, data, and classifier.
Loss function
Since this is a binary classification problem and the model outputs a probability (a single-unit layer), you'll use losses.BinaryCrossentropy loss function.
End of explanation
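To make the from_logits=True choice concrete, the per-example loss can be computed by hand with the numerically stable formulation. This is a sketch of the math only, not the Keras implementation.

```python
import math

def bce_from_logits(logit, label):
    # Stable binary cross-entropy on a raw logit (no sigmoid applied yet):
    # max(l, 0) - l*y + log(1 + exp(-|l|))
    return max(logit, 0.0) - logit * label + math.log1p(math.exp(-abs(logit)))

# A logit of 0 means p = 0.5, so the loss for a positive label is log(2).
```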
epochs = 5
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1*num_train_steps)
init_lr = 3e-5
optimizer = optimization.create_optimizer(init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw')
Explanation: Optimizer
For fine-tuning, let's use the same optimizer that BERT was originally trained with: the "Adaptive Moments" (Adam). This optimizer minimizes the prediction loss and does regularization by weight decay (not using moments), which is also known as AdamW.
For the learning rate (init_lr), you will use the same schedule as BERT pre-training: linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (num_warmup_steps). In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5).
End of explanation
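The shape of that schedule is easy to check by hand. The sketch below reproduces linear warm-up followed by linear decay to zero; it only illustrates the schedule's shape and is not the optimization.create_optimizer implementation, which may differ in small details.

```python
def lr_at_step(step, init_lr=3e-5, num_train_steps=1000, num_warmup_steps=100):
    # Linear warm-up from 0 to init_lr over the first num_warmup_steps,
    # then linear decay back down to 0 at num_train_steps.
    if step < num_warmup_steps:
        return init_lr * step / num_warmup_steps
    remaining = (num_train_steps - step) / max(1, num_train_steps - num_warmup_steps)
    return init_lr * max(0.0, remaining)
```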
classifier_model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics)
Explanation: Loading the BERT model and training
Using the classifier_model you created earlier, you can compile the model with the loss, metric and optimizer.
End of explanation
print(f'Training model with {tfhub_handle_encoder}')
history = classifier_model.fit(x=train_ds,
validation_data=val_ds,
epochs=epochs)
Explanation: Note: training time will vary depending on the complexity of the BERT model you have selected.
End of explanation
loss, accuracy = classifier_model.evaluate(test_ds)
print(f'Loss: {loss}')
print(f'Accuracy: {accuracy}')
Explanation: Evaluate the model
Let's see how the model performs. Two values will be returned. Loss (a number which represents the error, lower values are better), and accuracy.
End of explanation
history_dict = history.history
print(history_dict.keys())
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
fig = plt.figure(figsize=(10, 6))
fig.tight_layout()
plt.subplot(2, 1, 1)
# r is for "solid red line"
plt.plot(epochs, loss, 'r', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
# plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
Explanation: Plot the accuracy and loss over time
Based on the History object returned by model.fit(). You can plot the training and validation loss for comparison, as well as the training and validation accuracy:
End of explanation
dataset_name = 'imdb'
saved_model_path = './{}_bert'.format(dataset_name.replace('/', '_'))
classifier_model.save(saved_model_path, include_optimizer=False)
Explanation: In this plot, the red lines represent the training loss and accuracy, and the blue lines are the validation loss and accuracy.
Export for inference
Now you just save your fine-tuned model for later use.
End of explanation
reloaded_model = tf.saved_model.load(saved_model_path)
Explanation: Let's reload the model, so you can try it side by side with the model that is still in memory.
End of explanation
def print_my_examples(inputs, results):
result_for_printing = \
[f'input: {inputs[i]:<30} : score: {results[i][0]:.6f}'
for i in range(len(inputs))]
print(*result_for_printing, sep='\n')
print()
examples = [
'this is such an amazing movie!', # this is the same sentence tried earlier
'The movie was great!',
'The movie was meh.',
'The movie was okish.',
'The movie was terrible...'
]
reloaded_results = tf.sigmoid(reloaded_model(tf.constant(examples)))
original_results = tf.sigmoid(classifier_model(tf.constant(examples)))
print('Results from the saved model:')
print_my_examples(examples, reloaded_results)
print('Results from the model in memory:')
print_my_examples(examples, original_results)
Explanation: Here you can test your model on any sentence you want, just add to the examples variable below.
End of explanation
serving_results = reloaded_model \
.signatures['serving_default'](tf.constant(examples))
serving_results = tf.sigmoid(serving_results['classifier'])
print_my_examples(examples, serving_results)
Explanation: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. In Python, you can test them as follows:
End of explanation |
8,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the female respondent file.
Step1: Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household.
Step2: Display the PMF.
Step4: Define <tt>BiasPmf</tt>.
Step5: Make the biased Pmf of children in the household, as observed if you surveyed the children instead of the respondents.
Step6: Display the actual Pmf and the biased Pmf on the same axes.
Step7: Compute the means of the two Pmfs. | Python Code:
%matplotlib inline
import thinkstats2
import thinkplot
import chap01soln
resp = chap01soln.ReadFemResp()
print len(resp)
Explanation: Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the female respondent file.
End of explanation
numkdhh = thinkstats2.Pmf(resp.numkdhh)
numkdhh
Explanation: Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household.
End of explanation
thinkplot.Hist(numkdhh, label='actual')
thinkplot.Config(title="PMF of num children under 18",
xlabel="number of children under 18",
ylabel="probability")
Explanation: Display the PMF.
End of explanation
def BiasPmf(pmf, label=''):
    """Returns the Pmf with oversampling proportional to value.

    If pmf is the distribution of true values, the result is the
    distribution that would be seen if values are oversampled in
    proportion to their values; for example, if you ask students
    how big their classes are, large classes are oversampled in
    proportion to their size.

    Args:
      pmf: Pmf object.
      label: string label for the new Pmf.

    Returns:
      Pmf object
    """
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
Explanation: Define <tt>BiasPmf</tt>.
End of explanation
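The arithmetic inside BiasPmf can be sketched without thinkstats2 using a plain dict: multiply each probability by its value (the pmf.Mult(x, x) step), then renormalize (the pmf.Normalize() step).

```python
def bias(pmf):
    # Oversample each value x in proportion to x, then renormalize.
    weighted = {x: p * x for x, p in pmf.items()}
    total = sum(weighted.values())
    return {x: w / total for x, w in weighted.items()}

# A uniform pmf over class sizes 10 and 30 shifts toward the larger class:
# bias({10: 0.5, 30: 0.5}) gives {10: 0.25, 30: 0.75}
```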
biased_pmf = BiasPmf(numkdhh, label='biased')
thinkplot.Hist(biased_pmf)
thinkplot.Config(title="PMF of num children under 18",
xlabel="number of children under 18",
ylabel="probability")
Explanation: Make the biased Pmf of children in the household, as observed if you surveyed the children instead of the respondents.
End of explanation
width = 0.45
thinkplot.PrePlot(2)
thinkplot.Hist(biased_pmf, align="right", label="biased", width=width)
thinkplot.Hist(numkdhh, align="left", label="actual", width=width)
thinkplot.Config(title="PMFs of children under 18 in a household",
xlabel='number of children',
ylabel='probability')
Explanation: Display the actual Pmf and the biased Pmf on the same axes.
End of explanation
print "actual mean:", numkdhh.Mean()
print "biased mean:", biased_pmf.Mean()
Explanation: Compute the means of the two Pmfs.
End of explanation |
8,489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="top"></a>
Db2 JSON Features
There are a number of routines that are built in to Db2 that are used to manipulate JSON
documents. These routines are not externalized in the documentation because they were originally used by the
internal APIs of Db2 for managing the MongoDB interface. These routines have now been made available for use in both Db2 10.5 and 11.1.
Step1: Table of Contents
Db2 JSON Functions and Registration
Path Requirements
<p>
* [Programming with Db2 JSON](#programming)
* [Inserting JSON Document (JSON2BSON)](#json2bson)
* [Retrieving a JSON Document (BSON2JSON)](#bson2json)
* [Checking a JSON Document for Consistency](#bsonvalidate)
<p>
* [Manipulating JSON Documents](#manipulating)
* [Retrieving Data](#retrieving)
* [Retrieving Individual Fields](#atomic)
* [Accessing Elements in an Array](#array)
* [Accessing Portions of a Structure](#structures)
* [Determining if a Field is Null](#nulls)
* [How to Join JSON Fields](#joins)
<p>
* [JSON Data Types](#datatypes)
* [INTEGERS and BIGINT](#integers)
* [NUMBERS and FLOATING POINT](#numbers)
* [BOOLEAN Values](#boolean)
* [DATE, TIME, and TIMESTAMPS](#date)
* [Strings](#strings)
<p>
* [Dealing with Arrays](#arrays)
* [Retrieving Array Values as a Table](#table)
* [Determining the Size of an Array](#length)
* [Searching an Array for a Value](#getpos)
<p>
* [Updating JSON Documents](#updating)
* [Indexing Fields in JSON Documents](#indexing)
* [How to Simplify JSON Inserts and Retrieval](#simple)
[Back to Top](#top)
<a id='json'></a>
# Db2 JSON Functions
There is one built-in Db2 JSON function and a number of other functions that must be registered within
Db2 before they can be used. The names of the functions and their purpose are described below.
- **JSON_VAL** - Extracts data from a JSON document into SQL data types
- **JSON_TABLE** - Returns a table of values for a document that has array types in it
- **JSON_TYPE** - Returns documents that have a field with a specific data type (like array, or date)
- **JSON_LEN** - Returns the count of elements in an array type inside a document
- **BSON2JSON** - Convert BSON formatted document into JSON strings
- **JSON2BSON** - Convert JSON strings into a BSON document format
- **JSON_GET_POS_ARR_INDEX** - Retrieve the index of a value within an array type in a document
- **JSON_UPDATE** - Update a particular field or document using set syntax
- **BSON_VALIDATE** - Checks to make sure that a BSON field in a BLOB object is in a correct format
Aside from the JSON_VAL function, all other functions in this list must be catalogued before first
being used. The next set of SQL will catalog all of these functions for you. Note
Step2: Back to Top
<a id='path'></a>
Path Statement Requirements
All of the Db2 JSON functions have been placed into the SYSTOOLS schema. This means that in order to execute
any of these commands, you must prefix the command with SYSTOOLS, as in SYSTOOLS.JSON2BSON. In order to
remove this requirement, you must update the CURRENT PATH value to include SYSTOOLS as part of it. The
SQL below will tell you what the current PATH is.
Step3: If SYSTOOLS is not part of the path, you can update it with the following SQL.
Step4: From this point on you won't need to added the SYSTOOLS schema on the front of any of your SQL
statements that refer to these Db2 JSON functions.
Back to Top
<a id='programming'></a>
Programming with the JSON SQL Functions
The functions that are listed below give you the ability to retrieve and manipulate JSON documents that
you store in a column within a table. What these functions do not do is let you explore the structure of a
JSON document. The assumption is that you are storing "known" JSON documents within the column and that
you have some knowledge of the underlying structure.
What this means is that none of the functions listed below will let you determine the fields that are
found within the document. You must already know these fields and their structure (i.e. whether a field is an array),
and that you are either trying to extract some of these fields, or need to modify a field within the document.
If you need to determine the structure of the JSON document, you are better off using the JAVA APIs that are available for
manipulating these types of documents.
To store and retrieve an entire document from a column in a table, you would use
Step5: Back to Top
<a id='json2bson'></a>
JSON2BSON
Step6: This is an example of a poorly formatted JSON document.
Step7: Back to Top
<a id='bson2json'></a>
BSON2JSON
Step8: If you want to extract the entire contents of a JSON field, you need to use the BSON2JSON function.
Step9: One thing that you should note is that the JSON that is retrieved has been modified slightly so that
all of the values have quotes around them to avoid any ambiguity. Note that we didn't necessarily require them
when we input the data. For instance, our original JSON document what was inserted looked like this
Step10: The following SQL will inject a bad value into the beginning of the JSON field to test the results from the
BSON_VALIDATE funtion.
Step11: The BSON_VALIDATE should return a zero for this particular row since it is not a valid BSON document.
Step12: Back to Top
<a id='manipulating'></a>
Manipulating JSON Documents
The last section described how we can insert and retrieve entire JSON documents from a column in a table. This
section will explore a number of functions that allow access to individual fields within the JSON document. These
functions are
Step13: We can check the count of records to make sure that 42 employees were added to our table.
Step14: Additional DEPARTMENT Table
In addition to the JSON_EMP table, the following SQL will generate a traditional table called JSON_DEPT
that can be used to determine the name of the department an individual works in.
Step15: Back to Top
<a id='retrieving'></a>
Retrieving Data from a BSON Document
Now that we have inserted some JSON data into a table, this section will explore
the use of the JSON_VAL function to retrieve individual fields from the documents.
This built-in function will return a value from a document in a format that you specify.
The ability to dynamically change the returned data type is extremely important when we
examine index creation in another section.
The JSON_VAL function has the format
Step16: If the size of the field being returned is larger that the field specification,
you will get a NULL value returned, not a truncated value.
Step17: In the case of character fields, you may need to specify a larger return
size and then truncate it to get a subset of the data.
Step18: Back to Top
<a id='array'></a>
Retrieving Array Values
Selecting data from an array type will always give you the first value (element zero).
The employees all have extension numbers but some of them have more than one.
Some of the extensions start with a zero so since the column is being treated as an
integer you will get only 3 digits. It's probably better to define it as a character
string rather than a number! Note that
Step19: If you specify "
Step20: If you need to access a specific array element in a field, you can use the "dot"
notation after the field name. The first element starts at zero. If we select
the 2nd element (.1) all the employees that have a second extension will have a
value retrieved while the ones who don't will have a null value.
Back to Top
<a id='structures'></a>
Retrieving Structured Fields
Structured fields are retrieved using the same dot notation as arrays.
The field is specified by using the "field.subfield" format and these fields can be
an arbitrary number of levels deep.
The pay field in the employee record is made up of three additional fields.
Python
"pay"
Step21: If you attempt to retrieve the pay field, you will end up with a NULL value, not
an error code. The reason for this is that the JSON_VAL function cannot format the
field into an atomic value so it returns the NULL value instead.
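The dot-notation traversal described in this section can be illustrated against a parsed document in plain Python. In this sketch a numeric path segment indexes into an array (as in phoneno.1) and a missing field yields None, mirroring the NULL behaviour above; it illustrates the semantics only and is not the Db2 implementation.

```python
import json

def json_path(doc, path):
    # Walk a dotted path such as "pay.salary" or "phoneno.1".
    cur = json.loads(doc) if isinstance(doc, str) else doc
    for part in path.split('.'):
        if isinstance(cur, list) and part.isdigit():
            idx = int(part)
            cur = cur[idx] if idx < len(cur) else None
        elif isinstance(cur, dict):
            cur = cur.get(part)
        else:
            return None
        if cur is None:
            return None
    return cur
```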
Back to Top
<a id='nulls'></a>
Determining NULL Values in a Field
To determine whether a field exists you need to use the "u" flag. If you use the "u" flag, the value returned will be either
Step22: The results contain 40 employees who have a middle initial field, and two that do not.
The results can be misleading because an employee can have the midinit field defined,
but no value assigned to it
Step23: If you only want to know how many employees have the middle initial field (midinit)
that is empty, compare the returned value to an empty string.
Step24: Back to Top
<a id='joins'></a>
Joining JSON Tables
You can join tables with JSON columns by using the JSON_VAL function
to compare two values
Step25: You need to ensure that the data types from both JSON functions are compatible for
the join to work properly. In this case, the department number and the work department
are both returned as 3-byte character strings. If you decided to use integers
instead or a smaller string size, the join will not work as expected because
the conversion will result in truncated or NULL values.
If you plan on doing joins between JSON objects, you may want to consider creating
indexes on the documents to speed up the join process. More information on the use
of indexes is found at the end of this chapter.
Back to Top
<a id='datatypes'></a>
JSON Data Types
If you are unsure of what data type a field contains, you can use the JSON_TYPE
function to determine the type before retrieving the field.
The JSON_TYPE function has the format
Step26: The following SQL will generate a list of data types and field names found within this document.
Step27: The following sections will show how we can get atomic (non-array) types out of
a JSON document. We are not going to be specific which documents we want, other
than what field we want to retrieve.
A temporary table called SANDBOX is used throughout these examples
Step28: Back to Top
<a id='integers'></a>
JSON INTEGERS and BIGINT
Integers within JSON documents are easily identified as numbers that don't have
decimal places in them. There are two different types of integers supported
within Db2, identified by the size (number of digits) of the number itself.
Integer - A set of digits that do not include a decimal place. The number must be in the range -2,147,483,648 to 2,147,483,647.
Bigint - A set of digits that do not include a decimal place but exceed the range of an integer. The number must be in the range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
You don't explicitly state the type of integer that you are using.
The system will detect the type based on its size.
The JSON_TYPE function will return a value of 16 for integers and 18 for a
large integer (BIGINT). To retrieve a value from an integer field you need to
use the "i" flag and "l" (lowercase L) for big integers.
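The size-based detection can be sketched with the type codes quoted in this chapter (16 for integers, 18 for BIGINT, plus 1 for numbers and 8 for booleans mentioned later). The boundary values follow the integer ranges listed above; this is an illustration of the rule, not the BSON encoder.

```python
def bson_type_code(value):
    # bool must be tested before int (bool is a subclass of int in Python).
    if isinstance(value, bool):
        return 8                      # boolean
    if isinstance(value, int):
        if -2**31 <= value <= 2**31 - 1:
            return 16                 # 32-bit integer
        return 18                     # BIGINT
    if isinstance(value, float):
        return 1                      # number
    return None
```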
This first SQL statement will create a regular integer field.
Step29: The JSON_TYPE function will verify that this is an integer field (Type=16).
Step30: You can retrieve an integer value with either the 'i' flag or the 'l' flag.
This first SQL statement retrieves the value as an integer.
Step31: We can ask that the value be interpreted as a BIGINT by using the 'l' flag,
so JSON_VAL will expand the size of the return value.
Step32: The next SQL statement will create a field with a BIGINT size. Note that we don't
need to specify anything other than having a very big number!
Step33: The JSON_TYPE function will verify that this is a big integer field (Type=18).
Step34: We can check to see that the data is stored in the document as a BIGINT by
using the JSON_TYPE function.
Step35: Returning the data as an integer type 'i' will fail since the number is too big
to fit into an integer format. Note that you do not get an error message -
a NULL value gets returned (which in Python is interpreted as None).
Step36: Specifying the 'l' flag will make the data be returned properly.
Step37: Since we have an integer in the JSON field, we also have the option of returning
the value as a floating-point number (f) or as a decimal number (n). Either of
these options will work with integer values.
Step38: Back to Top
<a id='numbers'></a>
JSON NUMBERS and FLOATING POINT
JSON numbers are recognized by Db2 when there is a decimal point in the value.
Floating point values are recognized using the Exx specifier after the number
which represents the power of 10 that needs to be applied to the base value.
For instance, 1.0E01 is the value 10.
The JSON type for numbers is 1, whether it is in floating point format or decimal format.
The SQL statement below inserts a salary into the table (using the
standard decimal place notation).
Step39: The JSON_TYPE function will verify that this is a numeric field (Type=1).
Step40: Numeric data can be retrieved in either number (n) formant, integer (i - note that
you will get truncation), or floating point (f).
Step41: You may wonder why number format (n) results in an answer that has a fractional
component that isn't exactly 92342.20. The reason is that Db2 is converting the
value to DECFLOAT(34) which supports a higher precision number, but can result in
fractions that can't be accurately represented within the binary format. Casting
the value to DEC(9,2) will properly format the number.
Step42: A floating-point number is recognized by the Exx specifier in the number. The
BSON function will tag this value as a number even though you specified it in floating
point format. The following SQL inserts the floating value into the table.
Step43: The JSON_TYPE function will verify that this is a floating point field (Type=1).
Step44: The floating-point value can be retrieved as a number, integer, or floating point value.
Step45: Back to Top
<a id='boolean'></a>
JSON BOOLEAN VALUES
JSON has a data type which can be true or false (boolean). Db2 doesn't have an
equivalent data type for boolean, so we need to retrieve it as an integer or
character string (true/false).
The JSON type for boolean values is 8.
The SQL statement below inserts a true and false value into the table.
Step46: We will double-check what type the field is in the JSON record.
Step47: To retrieve the value, we can ask that it be formatted as an integer or number.
Step48: You can also retrieve a boolean field as a character or
binary field, but the results are not what you would expect
with binary.
Step49: Back to Top
<a id='date'></a>
JSON DATE, TIME, and TIMESTAMPS
This first SQL statement will insert a JSON field that uses the $date modifier.
Step50: Querying the data type of this field using JSON_VAL will return a value of 9 (date type).
Step51: If you decide to use a character string to represent a date, you can use either
the "s
Step52: Using the 'd' specification will return the value as a date.
Step53: What about timestamps? If you decide to store a timestamp into a field, you can
retrieve it in a variety of ways. This first set of SQL statements will retrieve
it as a string.
Step54: Retrieving it as a Date will also work, but the time portion will be removed.
Step55: You can also ask for the timestamp value by using the 'ts'
specification. Note that you can't get just the time portion
unless you use a SQL function to cast it.
Step56: To force the value to return just the time portion, either
store the data as a time value (HH
Step57: Back to Top
<a id='strings'></a>
JSON Strings
For character strings, you must specify what the maximum
length is. This example will return the size of the lastname
field as 10 characters long.
Step58: You must specify a length for the 's' parameter otherwise
you will get an error from the function. If the size of the
character string is too large to return, then the function
will return a null value for that field.
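That length behaviour is easy to mimic: the value comes back only when it fits within the declared size, otherwise NULL (None) is returned rather than a truncated string (a sketch of the behaviour described above):

```python
def json_val_s(value, maxlen):
    # Return the string only when it fits within maxlen characters;
    # an oversized value yields None (NULL), never a truncated string.
    if value is None or len(value) > maxlen:
        return None
    return value
```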
Step59: Back to Top
<a id='arrays'></a>
Dealing with JSON Arrays
JSON arrays require specialized handling since there is no easy way to map an array to a single column in a relational table. Instead, there are a number of functions which will convert the array into a table so that you can access the individual elements in an SQL statement.
<a id='table'></a>
Accessing all Elements in an Array
The following query works because we do not treat the field phoneno as an array
Step60: By default, only the first number of an array is returned
when you use JSON_VAL. However, there will be situations
where you do want to return all the values in an array. This
is where the JSON_TABLE function must be used.
The format of the JSON_TABLE function is
Step61: The TABLE( ... ) specification in the FROM clause is used
for table functions. The results that are returned from the
TABLE function are treated the same as a traditional table.
To create a query that gives the name of every employee and their extensions would require the following query.
Step62: Only a subset of the results is shown above, but you will
see that there are multiple lines for employees who have
more than one extension.
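The row expansion that JSON_TABLE performs — one output row per array element, paired with the rest of the record — can be sketched in plain Python. The field names (lastname, phoneno) match the employee documents used in this chapter, but the function itself only illustrates the semantics.

```python
def expand_array(docs, array_field, key_field):
    # Yield one (key, element) row per array element, the way JSON_TABLE
    # fans an array field out into multiple result rows.
    for doc in docs:
        values = doc.get(array_field, [])
        if not isinstance(values, list):
            values = [values]
        for value in values:
            yield doc.get(key_field), value

emps = [{"lastname": "HAAS", "phoneno": ["3978"]},
        {"lastname": "LUCCHESSI", "phoneno": ["3490", "4798"]}]
rows = list(expand_array(emps, "phoneno", "lastname"))
# rows: [("HAAS", "3978"), ("LUCCHESSI", "3490"), ("LUCCHESSI", "4798")]
```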
The results of a TABLE function must be named (AS ...) if
you need to refer to the results of the TABLE function in
the SELECT list or in other parts of the SQL.
You can use other SQL operators to sort or organize the
results. For instance, we can use the ORDER BY operator to
find out which employees have the same extension. Note how
the TABLE function is named PHONES and the VALUES column is
renamed to PHONE.
Step63: You can even find out how many people are sharing
extensions! The HAVING clause tells Db2 to only return
groupings where there are more than one employee with the
same extension.
Step64: Back to Top
<a id='length'></a>
Determining the Size of an Array
The previous example showed how we could retrieve the
values from within an array of a document. Sometimes an
application needs to determine how many values are in the
array itself. The JSON_LEN function is used to figure out
what the array count is.
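As a sketch of the semantics in Python (the None result for a missing or non-array field is an assumption here, mirroring how the other functions return NULL):

```python
def json_len(doc, field):
    # Element count for an array field; None (NULL) when the field is
    # absent or not an array (assumed behaviour for the non-array case).
    value = doc.get(field)
    return len(value) if isinstance(value, list) else None
```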
The format of the JSON_LEN function is
Step65: Back to Top
<a id='getpos'></a>
Finding a Value within an Array
The JSON_TABLE and JSON_LEN functions can be used to
retrieve all the values from an array, but searching for a
specific array value is difficult to do. One way to search
array values is to extract everything using the JSON_TABLE
function.
Step66: An easier way is to use the JSON_GET_POS_ARR_INDEX function.
This function will search array values without having to
extract the array values with the JSON_TABLE function.
The format of the JSON_GET_POS_ARR_INDEX function is
Step67: If we used quotes around the phone number, the function will not match any of
the values in the table.
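That type sensitivity — a quoted "1234" not matching the number 1234 — can be sketched in Python; the -1 convention for a value that is not found is an assumption of this illustration, not the documented return value.

```python
def get_pos_arr_index(doc, field, value):
    # Return the position of value in the array field, or -1 when absent.
    # Matching is type-sensitive: "1234" (string) != 1234 (number).
    arr = doc.get(field)
    if not isinstance(arr, list):
        return -1
    for i, element in enumerate(arr):
        if type(element) is type(value) and element == value:
            return i
    return -1
```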
Back to Top
<a id='updating'></a>
Updating JSON Documents
There are a couple of approaches available to updating JSON
documents. One approach is to extract the document from the
table in a text form using BSON2JSON and then using string
functions or regular expressions to modify the data.
The other option is to use the JSON_UPDATE statement. The
syntax of the JSON_UPDATE function is
Step68: To add a new field to the record, the JSON_UPDATE function needs to specify the
field and value pair.
Step69: Retrieving the document shows that the lastname field has now been added to the record.
Step70: If you specify a field that is an array type and do not
specify an element, you will end up replacing the entire
field with the value.
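The $set semantics (and the $unset removal shown below) can be sketched against a parsed document: $set adds a field or replaces it wholesale — including whole-array replacement — while $unset deletes it. This is an illustration of the behaviour only, not the Db2 routine, and the sample values are made up.

```python
def json_update(doc, operation):
    # Apply {"$set": {...}} and/or {"$unset": {...}} to a parsed document.
    for field, value in operation.get("$set", {}).items():
        doc[field] = value            # adds the field or replaces it wholesale
    for field in operation.get("$unset", {}):
        doc.pop(field, None)          # removing a missing field is a no-op
    return doc
```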
Step71: Running the SQL against the original phone data will work properly.
Step72: To remove the phone number field you need to use the $unset keyword and set the field to null.
Step73: Back to Top
<a id='indexing'></a>
Indexing JSON Documents
Db2 supports computed indexes, which allow functions
like JSON_VAL to be used as part of the index
definition. For instance, searching for an employee number
will result in a scan against the table if no indexes are
defined
Step74: The following command will time the select statement.
Step75: To create an index on the empno field, we use the JSON_VAL function to extract the
empno from the JSON field.
Step76: Rerunning the SQL results in the following performance
Step77: Db2 can now use the index to retrieve the record and the following plot shows the increased throughput.
Step78: Back to Top
<a id='simple'></a>
Simplifying JSON SQL Inserts and Retrieval
From a development perspective, you always need to convert
documents to and from JSON using the BSON2JSON and JSON2BSON
functions. There are ways to hide these functions from an
application and simplify some of the programming.
One approach to simplifying the conversion of documents
between formats is to use INSTEAD OF triggers. These
triggers can intercept transactions before they are applied
to the base tables. This approach requires that we create a
view on top of an existing table.
The first step is to create the base table with two copies
of the JSON column. One will contain the original JSON
character string while the second will contain the converted
BSON. For this example, the JSON column will be called INFO,
and the BSON column will be called BSONINFO. The use of two
columns containing JSON would appear strange at first. The
reason for the two columns is that Db2 expects the BLOB
column to contain binary data. You cannot insert a character
string (JSON) into the BSON column without converting it
first. Db2 will raise an error so the JSON column is there
to avoid an error while the conversion takes place.
From a debugging perspective, we can keep both the CLOB and
BLOB values in this table if we want. The trigger will set
the JSON column to null after the BSON column has been
populated.
Step79: To use INSTEAD OF triggers, a view needs to be created on
top of the base table. Note that we explicitly use the
SYSTOOLS schema to make sure we are getting the correct
function used here.
Step80: At this point we can create three INSTEAD OF triggers to handle insert,
updates and deletes on the view.
On INSERT the DEFAULT keyword is used to generate the ID number, the JSON field is
set to NULL and the BSON column contains the converted value of the JSON string.
Step81: On UPDATES, the sequence number remains the same, and the BSON field is updated
with the contents of the JSON field.
Step82: Finally, the DELETE trigger will just remove the row.
Step83: Applications will only deal with the EMP_TXS view. Any
inserts will use the text version of the JSON and not have
to worry about using the JSON2BSON function since the
underlying INSTEAD OF trigger will take care of the
conversion.
The following insert statement only includes the JSON string
since the sequence number will be generated automatically as
part of the insert.
Step84: Selecting from the EMP_TXS view will return the JSON in a readable format
Step85: The base table only contains the BSON but the view translates the value back into a readable format.
An update statement that replaces the entire string works as expected.
Step86: If you want to manipulate the BSON directly (say change the employee number),
you need to refer to the BASE table instead.
Step87: And we can check it using our original view. | Python Code:
%run db2.ipynb
Explanation: <a id="top"></a>
Db2 JSON Features
There are a number of routines built into Db2 that are used to manipulate JSON
documents. These routines are not externalized in the documentation because they were originally used by the
internal APIs of Db2 for managing the MongoDB interface. These routines have now been made available for use in both Db2 10.5 and 11.1.
End of explanation
%%sql -q
CREATE FUNCTION SYSTOOLS.JSON_TABLE(
INJSON BLOB(16M), INELEM VARCHAR(2048), RETTYPE VARCHAR(100))
RETURNS TABLE(TYPE INTEGER, VALUE VARCHAR(2048))
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
NO EXTERNAL ACTION
DISALLOW PARALLEL
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonTable';
CREATE FUNCTION SYSTOOLS.JSON_TYPE(
INJSON BLOB(16M), INELEM VARCHAR(2048), MAXLENGTH INTEGER)
RETURNS INTEGER
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
RETURNS NULL ON NULL INPUT
NO EXTERNAL ACTION
EXTERNAL NAME 'db2json!jsonType';
CREATE FUNCTION SYSTOOLS.JSON_LEN(
INJSON BLOB(16M), INELEM VARCHAR(2048))
RETURNS INTEGER
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonLen';
CREATE FUNCTION SYSTOOLS.BSON2JSON(INBSON BLOB(16M)) RETURNS CLOB(16M)
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonBsonToJson';
CREATE FUNCTION SYSTOOLS.JSON2BSON(INJSON CLOB(16M)) RETURNS BLOB(16M)
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonToBson';
CREATE FUNCTION SYSTOOLS.JSON_GET_POS_ARR_INDEX(
INJSON BLOB(16M), QUERY VARCHAR(32672) FOR BIT DATA)
RETURNS INTEGER
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
CALLED ON NULL INPUT
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonGetPosArrIndex';
CREATE FUNCTION SYSTOOLS.JSON_UPDATE(
INJSON BLOB(16M), INELEM VARCHAR(32672))
RETURNS BLOB(16M)
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
CALLED ON NULL INPUT
NO EXTERNAL ACTION
SCRATCHPAD 2048
EXTERNAL NAME 'db2json!jsonUpdate2';
CREATE FUNCTION SYSTOOLS.BSON_VALIDATE(
INJSON BLOB(16M))
RETURNS INT
LANGUAGE C
PARAMETER STYLE SQL
PARAMETER CCSID UNICODE
NO SQL
NOT FENCED
DETERMINISTIC
ALLOW PARALLEL
RETURNS NULL ON NULL INPUT
NO EXTERNAL ACTION
EXTERNAL NAME 'db2json!jsonValidate';
Explanation: Table of Contents
* [Db2 JSON Functions and Registration](#json)
* [Path Requirements](#path)
<p>
* [Programming with Db2 JSON](#programming)
* [Inserting JSON Document (JSON2BSON)](#json2bson)
* [Retrieving a JSON Document (BSON2JSON)](#bson2json)
* [Checking a JSON Document for Consistency](#bsonvalidate)
<p>
* [Manipulating JSON Documents](#manipulating)
* [Retrieving Data](#retrieving)
* [Retrieving Individual Fields](#atomic)
* [Accessing Elements in an Array](#array)
* [Accessing Portions of a Structure](#structures)
* [Determining if a Field is Null](#nulls)
* [How to Join JSON Fields](#joins)
<p>
* [JSON Data Types](#datatypes)
* [INTEGERS and BIGINT](#integers)
* [NUMBERS and FLOATING POINT](#numbers)
* [BOOLEAN Values](#boolean)
* [DATE, TIME, and TIMESTAMPS](#date)
* [Strings](#strings)
<p>
* [Dealing with Arrays](#arrays)
* [Retrieving Array Values as a Table](#table)
* [Determining the Size of an Array](#length)
* [Searching an Array for a Value](#getpos)
<p>
* [Updating JSON Documents](#updating)
* [Indexing Fields in JSON Documents](#indexing)
* [How to Simplify JSON Inserts and Retrieval](#simple)
[Back to Top](#top)
<a id='json'></a>
Db2 JSON Functions
There is one built-in Db2 JSON function and a number of other functions that must be registered within
Db2 before they can be used. The names of the functions and their purpose are described below.
- **JSON_VAL** - Extracts data from a JSON document into SQL data types
- **JSON_TABLE** - Returns a table of values for a document that has array types in it
- **JSON_TYPE** - Returns documents that have a field with a specific data type (like array, or date)
- **JSON_LEN** - Returns the count of elements in an array type inside a document
- **BSON2JSON** - Convert BSON formatted document into JSON strings
- **JSON2BSON** - Convert JSON strings into a BSON document format
- **JSON_GET_POS_ARR_INDEX** - Retrieve the index of a value within an array type in a document
- **JSON_UPDATE** - Update a particular field or document using set syntax
- **BSON_VALIDATE** - Checks to make sure that a BSON field in a BLOB object is in a correct format
Aside from the JSON_VAL function, all other functions in this list must be catalogued before first
being used. The next set of SQL will catalog all of these functions for you. Note: These functions have already
been catalogued.
End of explanation
%sql VALUES CURRENT PATH
Explanation: Back to Top
<a id='path'></a>
Path Statement Requirements
All of the Db2 JSON functions have been placed into the SYSTOOLS schema. This means that in order to execute
any of these commands, you must prefix the command with SYSTOOLS, as in SYSTOOLS.JSON2BSON. In order to
remove this requirement, you must update the CURRENT PATH value to include SYSTOOLS as part of it. The
SQL below will tell you what the current PATH is.
End of explanation
%sql SET CURRENT PATH = CURRENT PATH, SYSTOOLS
Explanation: If SYSTOOLS is not part of the path, you can update it with the following SQL.
End of explanation
%sql -q DROP TABLE TESTJSON
%%sql
CREATE TABLE TESTJSON
(
JSON_FIELD BLOB(4000) INLINE LENGTH 4000
)
Explanation: From this point on you won't need to add the SYSTOOLS schema prefix to any of your SQL
statements that refer to these Db2 JSON functions.
Back to Top
<a id='programming'></a>
Programming with the JSON SQL Functions
The functions that are listed below give you the ability to retrieve and manipulate JSON documents that
you store in a column within a table. What these functions do not do is let you explore the structure of a
JSON document. The assumption is that you are storing "known" JSON documents within the column and that
you have some knowledge of the underlying structure.
What this means is that none of the functions listed below will let you discover which fields are
found within the document. You must already know what these fields are and what their structure is (for example,
whether a field is an array), because you are either trying to extract some of these fields or you need to modify a field within the document.
If you need to determine the structure of the JSON document, you are better off using the JAVA APIs that are available for
manipulating these types of documents.
To store and retrieve an entire document from a column in a table, you would use:
BSON2JSON - Convert BSON formatted document into JSON strings
JSON2BSON - Convert JSON strings into a BSON document format
You can also verify the contents of a document that is stored in a column by using the BSON_VALIDATE function:
BSON_VALIDATE - Checks to make sure that a BSON field in a BLOB object is in a correct format
BSON is the binary format used to store JSON documents in MongoDB and is also used by Db2. Documents are always stored
in BLOBs (binary large objects), which can be as large as 16M. BLOBs can be defined to be INLINE, which will result in
improved performance for any of these JSON functions. If you create a table with a BLOB column, try to use a size that
will fit within a Db2 page size. For instance, if you have a 32K page size for your data base, creating BLOB objects less
than 32000 bytes in size will result in better performance:
SQL
CREATE TABLE JSON_EMP (
EMP_INFO BLOB(4000) INLINE LENGTH 4000
);
If a large object is not inlined, or greater than 32K in size, the resulting object will be placed into a
large table space. The end result is that BLOB objects will not be kept in bufferpools (which means a direct read is
required from disk for access to any BLOB object) and that two I/Os are required to get any document. One I/O is required to get the base page, while the second is needed to get the BLOB object. By using the INLINE option and keeping the BLOB size below the page size, we can avoid both of these performance overheads.
Creating Tables that support JSON Documents
To create a table that will store JSON data, you need to define the column so that is of
a binary type. The JSON field must be created as a BLOB (at the time of writing, the use
of the VARBINARY data type has not been verified) column. In order to ensure
good performance, you should have BLOB specified as INLINE if possible.
Of course, if your JSON object is greater than 32K, there is no way it will be able to sit on a
Db2 page, so you will need to use the large object format. However, if the object is significantly smaller than
32K, you will end up getting better performance if it can remain on one Db2 page.
End of explanation
%%sql
INSERT INTO TESTJSON VALUES ( JSON2BSON('{Name:"George"}') )
Explanation: Back to Top
<a id='json2bson'></a>
JSON2BSON: Inserting a JSON Document
Inserting into a column requires the use of the JSON2BSON function. The JSON2BSON function (and BSON2JSON) are used to transfer data in and out of a traditional Db2 BLOB column. There is no native JSON data type in Db2. Input to the JSON2BSON function must be a properly formatted JSON document. In the event that the document does not follow proper JSON rules, you will get an error code from the function.
End of explanation
%sql -j select bson2json(json_field) from testjson
%%sql
INSERT INTO TESTJSON VALUES
( JSON2BSON('{Name:, Age: 32}'))
Explanation: This is an example of a poorly formatted JSON document.
End of explanation
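To see why this insert fails, note that strict JSON parsers reject such a document as well, since the Name field has no value. A quick check with Python's standard `json` module (illustrative only; Db2's parser is more lenient than Python's about unquoted field names, but both reject a missing value):

```python
import json

def is_valid_json(text):
    """Return True if text parses as strict JSON, False otherwise."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

# A quoted-key variant of the failing document above: "Name" still has no value.
bad_doc = '{"Name":, "Age": 32}'
```

Here `is_valid_json(bad_doc)` returns False, while a well-formed document such as `'{"Name": "George"}'` returns True.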
%%sql
SELECT CAST(JSON_FIELD AS VARCHAR(60)) FROM TESTJSON
Explanation: Back to Top
<a id='bson2json'></a>
BSON2JSON: Retrieving a JSON Document
Note that the data that is stored in a JSON column is in a special binary format called BSON. Selecting from the field will only result in hexadecimal characters being displayed.
End of explanation
%%sql -j
SELECT BSON2JSON(JSON_FIELD) FROM TESTJSON
Explanation: If you want to extract the entire contents of a JSON field, you need to use the BSON2JSON function.
End of explanation
%%sql
SELECT BSON_VALIDATE(JSON_FIELD) FROM TESTJSON
Explanation: One thing that you should note is that the JSON that is retrieved has been modified slightly so that
all of the values have quotes around them to avoid any ambiguity. Note that we didn't necessarily require them
when we input the data. For instance, our original JSON document what was inserted looked like this:
JSON
{Name:"George"}
What gets returned is slightly different, but still considered to be the same JSON document. You must ensure that the
naming of any fields is consistent between documents. "Name", "name", and "NAME" are all considered different fields. One option is to use lowercase field names, or to use camel-case (first letter is capitalized) in all of your field definitions. The important thing is to keep the naming consistent so you can find the fields in the document.
JSON
{"Name":"George"}
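The case sensitivity of field names can be demonstrated with Python's `json` module (an illustrative aside; the sample document here is not one of the records above):

```python
import json

# "Name" and "name" are distinct fields in JSON: matching is case sensitive,
# which is why consistent naming across documents matters.
doc = json.loads('{"Name": "George", "name": "george"}')
```

Both keys survive parsing as separate fields, and a lookup with a third capitalization such as `NAME` finds nothing.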
Back to Top
<a id='bsonvalidate'></a>
BSON_VALIDATE: Checking the format of a JSON document
There is no validation done against the contents of a BLOB column which contains JSON data.
As long as the JSON object is under program control and you are using the JSON functions,
you are probably not going to run across problems with the data. To be safe, stick to either the Db2 JSON
functions or the db2nosql command line (which uses MongoDB syntax) when accessing your JSON columns.
In the event you believe that a document is corrupted for some reason, you can use the BSON_VALIDATE to make sure it
is okay (or not!). The function will return a value of 1 if the record is okay, or a zero otherwise. The one row
that we have inserted into the TESTJSON table should be okay.
End of explanation
%%sql
UPDATE TESTJSON
SET JSON_FIELD = BLOB('!') || JSON_FIELD
Explanation: The following SQL will inject a bad value into the beginning of the JSON field to test the results from the
BSON_VALIDATE function.
%%sql
SELECT BSON_VALIDATE(JSON_FIELD) FROM TESTJSON
Explanation: The BSON_VALIDATE should return a zero for this particular row since it is not a valid BSON document.
End of explanation
%%sql
DROP TABLE JSON_EMP;
CREATE TABLE JSON_EMP
(
SEQ INT NOT NULL GENERATED ALWAYS AS IDENTITY,
EMP_DATA BLOB(4000) INLINE LENGTH 4000
);
INSERT INTO JSON_EMP(EMP_DATA) VALUES
JSON2BSON( '{ "empno":"000010", "firstnme":"CHRISTINE", "midinit":"I", "lastname":"HAAS", "workdept":"A00", "phoneno":[3978], "hiredate":"01/01/1995", "job":"PRES", "edlevel":18, "sex":"F", "birthdate":"08/24/1963", "pay" : { "salary":152750.00, "bonus":1000.00, "comm":4220.00} }'),
JSON2BSON( '{"empno":"000020","firstnme":"MICHAEL","lastname":"THOMPSON", "workdept":"B01","phoneno":[3476,1422],"hiredate":"10/10/2003", "job":"MANAGER","edlevel":18,"sex":"M","birthdate":"02/02/1978", "pay": {"salary":94250.00,"bonus":800.00,"comm":3300.00}}'),
JSON2BSON( '{"empno":"000030","firstnme":"SALLY","midinit":"A","lastname":"KWAN", "workdept":"C01","phoneno":[4738],"hiredate":"04/05/2005", "job":"MANAGER","edlevel":20,"sex":"F","birthdate":"05/11/1971", "pay": {"salary":98250.00,"bonus":800.00,"comm":3060.00} }'),
JSON2BSON( '{ "empno":"000050","firstnme":"JOHN","midinit":"B","lastname":"GEYER", "workdept":"E01","phoneno":[6789],"hiredate":"08/17/1979", "job":"MANAGER","edlevel":16,"sex":"M","birthdate":"09/15/1955", "pay": {"salary":80175.00,"bonus":800.00,"comm":3214.00} }'),
JSON2BSON( '{ "empno":"000060","firstnme":"IRVING","lastname":"STERN", "workdept":"D11","phoneno":[6423,2433],"hiredate":"09/14/2003", "job":"MANAGER","edlevel":16,"sex":"M","birthdate":"07/07/1975", "pay": {"salary":72250.00,"bonus":500.00,"comm":2580.00} }'),
JSON2BSON( '{ "empno":"000070","firstnme":"EVA","midinit":"D","lastname":"PULASKI", "workdept":"D21","phoneno":[7831,1422,4567],"hiredate":"09/30/2005", "job":"MANAGER","edlevel":16,"sex":"F","birthdate":"05/26/2003", "pay": {"salary":96170.00,"bonus":700.00,"comm":2893.00} }'),
JSON2BSON( '{ "empno":"000090","firstnme":"EILEEN","midinit":"W","lastname":"HENDERSON", "workdept":"E11","phoneno":[5498],"hiredate":"08/15/2000", "job":"MANAGER","edlevel":16,"sex":"F","birthdate":"05/15/1971", "pay": {"salary":89750.00,"bonus":600.00,"comm":2380.00} }'),
JSON2BSON( '{ "empno":"000100","firstnme":"THEODORE","midinit":"Q","lastname":"SPENSER", "workdept":"E21","phoneno":[0972],"hiredate":"06/19/2000", "job":"MANAGER","edlevel":14,"sex":"M","birthdate":"12/18/1980", "pay": {"salary":86150.00,"bonus":500.00,"comm":2092.00} }'),
JSON2BSON( '{ "empno":"000110","firstnme":"VINCENZO","midinit":"G","lastname":"LUCCHESSI", "workdept":"A00","phoneno":[3490,3567],"hiredate":"05/16/1988", "job":"SALESREP","edlevel":19,"sex":"M","birthdate":"11/05/1959", "pay": {"salary":66500.00,"bonus":900.00,"comm":3720.00} }'),
JSON2BSON( '{ "empno":"000120","firstnme":"SEAN","midinit":"","lastname":"O''CONNELL", "workdept":"A00","phoneno":[2167,1533],"hiredate":"12/05/1993", "job":"CLERK","edlevel":14,"sex":"M","birthdate":"10/18/1972", "pay": {"salary":49250.00,"bonus":600.00,"comm":2340.00} }'),
JSON2BSON( '{ "empno":"000130","firstnme":"DELORES","midinit":"M","lastname":"QUINTANA", "workdept":"C01","phoneno":[4578],"hiredate":"07/28/2001", "job":"ANALYST","edlevel":16,"sex":"F","birthdate":"09/15/1955", "pay": {"salary":73800.00,"bonus":500.00,"comm":1904.00} }'),
JSON2BSON( '{ "empno":"000140","firstnme":"HEATHER","midinit":"A","lastname":"NICHOLLS", "workdept":"C01","phoneno":[1793],"hiredate":"12/15/2006", "job":"ANALYST","edlevel":18,"sex":"F","birthdate":"01/19/1976", "pay": {"salary":68420.00,"bonus":600.00,"comm":2274.00} }'),
JSON2BSON( '{ "empno":"000150","firstnme":"BRUCE","midinit":"","lastname":"ADAMSON", "workdept":"D11","phoneno":[4510],"hiredate":"02/12/2002", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"05/17/1977", "pay": {"salary":55280.00,"bonus":500.00,"comm":2022.00} }'),
JSON2BSON( '{ "empno":"000160","firstnme":"ELIZABETH","midinit":"R","lastname":"PIANKA", "workdept":"D11","phoneno":[3782,9322],"hiredate":"10/11/2006", "job":"DESIGNER","edlevel":17,"sex":"F","birthdate":"04/12/1980", "pay": {"salary":62250.00,"bonus":400.00,"comm":1780.00} }'),
JSON2BSON( '{ "empno":"000170","firstnme":"MASATOSHI","midinit":"J","lastname":"YOSHIMURA", "workdept":"D11","phoneno":[2890],"hiredate":"09/15/1999", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"01/05/1981", "pay": {"salary":44680.00,"bonus":500.00,"comm":1974.00} }'),
JSON2BSON( '{ "empno":"000180","firstnme":"MARILYN","midinit":"S","lastname":"SCOUTTEN", "workdept":"D11","phoneno":[1682,9945],"hiredate":"07/07/2003", "job":"DESIGNER","edlevel":17,"sex":"F","birthdate":"02/21/1979", "pay": {"salary":51340.00,"bonus":500.00,"comm":1707.00} }'),
JSON2BSON( '{ "empno":"000190","firstnme":"JAMES","midinit":"H","lastname":"WALKER", "workdept":"D11","phoneno":[2986,3644],"hiredate":"07/26/2004", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"06/25/1982", "pay": {"salary":50450.00,"bonus":400.00,"comm":1636.00} }'),
JSON2BSON( '{ "empno":"000200","firstnme":"DAVID","midinit":"","lastname":"BROWN", "workdept":"D11","phoneno":[4501,2522],"hiredate":"03/03/2002", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"05/29/1971", "pay": {"salary":57740.00,"bonus":600.00,"comm":2217.00} }'),
JSON2BSON( '{ "empno":"000210","firstnme":"WILLIAM","midinit":"T","lastname":"JONES", "workdept":"","phoneno":[0942],"hiredate":"04/11/1998", "job":"DESIGNER","edlevel":17,"sex":"M","birthdate":"02/23/2003", "pay": {"salary":68270.00,"bonus":400.00,"comm":1462.00} }'),
JSON2BSON( '{ "empno":"000220","firstnme":"JENNIFER","midinit":"K","lastname":"LUTZ", "workdept":"D11","phoneno":[0672],"hiredate":"08/29/1998", "job":"DESIGNER","edlevel":18,"sex":"F","birthdate":"03/19/1978", "pay": {"salary":49840.00,"bonus":600.00,"comm":2387.00} }'),
JSON2BSON( '{ "empno":"000230","firstnme":"JAMES","midinit":"J","lastname":"JEFFERSON", "workdept":"D21","phoneno":[2094,8999,3756],"hiredate":"11/21/1996", "job":"CLERK","edlevel":14,"sex":"M","birthdate":"05/30/1980", "pay": {"salary":42180.00,"bonus":400.00,"comm":1774.00} }'),
JSON2BSON( '{ "empno":"000240","firstnme":"SALVATORE","midinit":"M","lastname":"MARINO", "workdept":"D21","phoneno":[3780],"hiredate":"12/05/2004", "job":"CLERK","edlevel":17,"sex":"M","birthdate":"03/31/2002", "pay": {"salary":48760.00,"bonus":600.00,"comm":2301.00} }'),
JSON2BSON( '{ "empno":"000250","firstnme":"DANIEL","midinit":"S","lastname":"SMITH", "workdept":"D21","phoneno":[0961],"hiredate":"10/30/1999", "job":"CLERK","edlevel":15,"sex":"M","birthdate":"11/12/1969", "pay": {"salary":49180.00,"bonus":400.00,"comm":1534.00} }'),
JSON2BSON( '{ "empno":"000260","firstnme":"SYBIL","midinit":"P","lastname":"JOHNSON", "workdept":"D21","phoneno":[8953,2533],"hiredate":"09/11/2005", "job":"CLERK","edlevel":16,"sex":"F","birthdate":"10/05/1976", "pay": {"salary":47250.00,"bonus":300.00,"comm":1380.00} }'),
JSON2BSON( '{ "empno":"000270","firstnme":"MARIA","midinit":"L","lastname":"PEREZ", "workdept":"D21","phoneno":[9001],"hiredate":"09/30/2006", "job":"CLERK","edlevel":15,"sex":"F","birthdate":"05/26/2003", "pay": {"salary":37380.00,"bonus":500.00,"comm":2190.00} }'),
JSON2BSON( '{ "empno":"000280","firstnme":"ETHEL","midinit":"R","lastname":"SCHNEIDER", "workdept":"E11","phoneno":[8997,1422],"hiredate":"03/24/1997", "job":"OPERATOR","edlevel":17,"sex":"F","birthdate":"03/28/1976", "pay": {"salary":36250.00,"bonus":500.00,"comm":2100.00} }'),
JSON2BSON( '{ "empno":"000290","firstnme":"JOHN","midinit":"R","lastname":"PARKER", "workdept":"E11","phoneno":[4502],"hiredate":"05/30/2006", "job":"OPERATOR","edlevel":12,"sex":"M","birthdate":"07/09/1985", "pay": {"salary":35340.00,"bonus":300.00,"comm":1227.00} }'),
JSON2BSON( '{ "empno":"000300","firstnme":"PHILIP","midinit":"X","lastname":"SMITH", "workdept":"E11","phoneno":[2095],"hiredate":"06/19/2002", "job":"OPERATOR","edlevel":14,"sex":"M","birthdate":"10/27/1976", "pay": {"salary":37750.00,"bonus":400.00,"comm":1420.00} }'),
JSON2BSON( '{ "empno":"000310","firstnme":"MAUDE","midinit":"F","lastname":"SETRIGHT", "workdept":"E11","phoneno":[3332,8005],"hiredate":"09/12/1994", "job":"OPERATOR","edlevel":12,"sex":"F","birthdate":"04/21/1961", "pay": {"salary":35900.00,"bonus":300.00,"comm":1272.00} }'),
JSON2BSON( '{ "empno":"000320","firstnme":"RAMLAL","midinit":"V","lastname":"MEHTA", "workdept":"E21","phoneno":[9990,1533],"hiredate":"07/07/1995", "job":"FIELDREP","edlevel":16,"sex":"M","birthdate":"08/11/1962", "pay": {"salary":39950.00,"bonus":400.00,"comm":1596.00} }'),
JSON2BSON( '{ "empno":"000330","firstnme":"WING","midinit":"","lastname":"LEE", "workdept":"E21","phoneno":[2103,2453],"hiredate":"02/23/2006", "job":"FIELDREP","edlevel":14,"sex":"M","birthdate":"07/18/1971", "pay": {"salary":45370.00,"bonus":500.00,"comm":2030.00} }'),
JSON2BSON( '{ "empno":"000340","firstnme":"JASON","midinit":"R","lastname":"GOUNOT", "workdept":"E21","phoneno":[5698,7744],"hiredate":"05/05/1977", "job":"FIELDREP","edlevel":16,"sex":"M","birthdate":"05/17/1956", "pay": {"salary":43840.00,"bonus":500.00,"comm":1907.00} }'),
JSON2BSON( '{ "empno":"200010","firstnme":"DIAN","midinit":"J","lastname":"HEMMINGER", "workdept":"A00","phoneno":[3978,2564],"hiredate":"01/01/1995", "job":"SALESREP","edlevel":18,"sex":"F","birthdate":"08/14/1973", "pay": {"salary":46500.00,"bonus":1000.00,"comm":4220.00} }'),
JSON2BSON( '{ "empno":"200120","firstnme":"GREG","midinit":"","lastname":"ORLANDO", "workdept":"A00","phoneno":[2167,1690],"hiredate":"05/05/2002", "job":"CLERK","edlevel":14,"sex":"M","birthdate":"10/18/1972", "pay": {"salary":39250.00,"bonus":600.00,"comm":2340.00} }'),
JSON2BSON( '{ "empno":"200140","firstnme":"KIM","midinit":"N","lastname":"NATZ", "workdept":"C01","phoneno":[1793],"hiredate":"12/15/2006", "job":"ANALYST","edlevel":18,"sex":"F","birthdate":"01/19/1976", "pay": {"salary":68420.00,"bonus":600.00,"comm":2274.00} }'),
JSON2BSON( '{ "empno":"200170","firstnme":"KIYOSHI","midinit":"","lastname":"YAMAMOTO", "workdept":"D11","phoneno":[2890],"hiredate":"09/15/2005", "job":"DESIGNER","edlevel":16,"sex":"M","birthdate":"01/05/1981", "pay": {"salary":64680.00,"bonus":500.00,"comm":1974.00} }'),
JSON2BSON( '{ "empno":"200220","firstnme":"REBA","midinit":"K","lastname":"JOHN", "workdept":"D11","phoneno":[0672],"hiredate":"08/29/2005", "job":"DESIGNER","edlevel":18,"sex":"F","birthdate":"03/19/1978", "pay": {"salary":69840.00,"bonus":600.00,"comm":2387.00} }'),
JSON2BSON( '{ "empno":"200240","firstnme":"ROBERT","midinit":"M","lastname":"MONTEVERDE", "workdept":"D21","phoneno":[3780,6823],"hiredate":"12/05/2004", "job":"CLERK","edlevel":17,"sex":"M","birthdate":"03/31/1984", "pay": {"salary":37760.00,"bonus":600.00,"comm":2301.00} }'),
JSON2BSON( '{ "empno":"200280","firstnme":"EILEEN","midinit":"R","lastname":"SCHWARTZ", "workdept":"E11","phoneno":[8997,9410],"hiredate":"03/24/1997", "job":"OPERATOR","edlevel":17,"sex":"F","birthdate":"03/28/1966", "pay": {"salary":46250.00,"bonus":500.00,"comm":2100.00} }'),
JSON2BSON( '{ "empno":"200310","firstnme":"MICHELLE","midinit":"F","lastname":"SPRINGER", "workdept":"E11","phoneno":[3332,7889],"hiredate":"09/12/1994", "job":"OPERATOR","edlevel":12,"sex":"F","birthdate":"04/21/1961", "pay": {"salary":35900.00,"bonus":300.00,"comm":1272.00} }'),
JSON2BSON( '{ "empno":"200330","firstnme":"HELENA","midinit":"","lastname":"WONG", "workdept":"E21","phoneno":[2103],"hiredate":"02/23/2006", "job":"FIELDREP","edlevel":14,"sex":"F","birthdate":"07/18/1971", "pay": {"salary":35370.00,"bonus":500.00,"comm":2030.00} }'),
JSON2BSON( '{ "empno":"200340","firstnme":"ROY","midinit":"R","lastname":"ALONZO", "workdept":"E21","phoneno":[5698,1533],"hiredate":"07/05/1997", "job":"FIELDREP","edlevel":16,"sex":"M","birthdate":"05/17/1956", "pay": {"salary":31840.00,"bonus":500.00,"comm":1907.00} }')
;
Explanation: Back to Top
<a id='manipulating'></a>
Manipulating JSON Documents
The last section described how we can insert and retrieve entire JSON documents from a column in a table. This
section will explore a number of functions that allow access to individual fields within the JSON document. These
functions are:
JSON_VAL - Extracts data from a JSON document into SQL data types
JSON_TABLE - Returns a table of values for a document that has array types in it
JSON_TYPE - Returns documents that have a field with a specific data type (like array, or date)
JSON_LEN - Returns the count of elements in an array type inside a document
JSON_GET_POS_ARR_INDEX - Retrieve the index of a value within an array type in a document
Our examples in this section will require a couple of tables to be created.
Sample JSON Table Creation
The following SQL will load the JSON_EMP table with a number of JSON objects. These records are modelled
on the EMPLOYEE table found in the Db2 SAMPLE database.
End of explanation
%sql SELECT COUNT(*) FROM JSON_EMP
Explanation: We can check the count of records to make sure that 42 employees were added to our table.
End of explanation
%%sql -q
DROP TABLE JSON_DEPT;
CREATE TABLE JSON_DEPT
(
SEQ INT NOT NULL GENERATED ALWAYS AS IDENTITY,
DEPT_DATA BLOB(4000) INLINE LENGTH 4000
);
INSERT INTO JSON_DEPT(DEPT_DATA) VALUES
JSON2BSON('{"deptno":"A00", "mgrno":"000010", "admrdept":"A00", "deptname":"SPIFFY COMPUTER SERVICE DIV."}'),
JSON2BSON('{"deptno":"B01", "mgrno":"000020", "admrdept":"A00", "deptname":"PLANNING" }'),
JSON2BSON('{"deptno":"C01", "mgrno":"000030", "admrdept":"A00", "deptname":"INFORMATION CENTER" }'),
JSON2BSON('{"deptno":"D01", "admrdept":"A00", "deptname":"DEVELOPMENT CENTER" }'),
JSON2BSON('{"deptno":"D11", "mgrno":"000060", "admrdept":"D01", "deptname":"MANUFACTURING SYSTEMS" }'),
JSON2BSON('{"deptno":"D21", "mgrno":"000070", "admrdept":"D01", "deptname":"ADMINISTRATION SYSTEMS" }'),
JSON2BSON('{"deptno":"E01", "mgrno":"000050", "admrdept":"A00", "deptname":"SUPPORT SERVICES" }'),
JSON2BSON('{"deptno":"E11", "mgrno":"000090", "admrdept":"E01", "deptname":"OPERATIONS" }'),
JSON2BSON('{"deptno":"E21", "mgrno":"000100", "admrdept":"E01", "deptname":"SOFTWARE SUPPORT" }'),
JSON2BSON('{"deptno":"F22", "admrdept":"E01", "deptname":"BRANCH OFFICE F2" }'),
JSON2BSON('{"deptno":"G22", "admrdept":"E01", "deptname":"BRANCH OFFICE G2" }'),
JSON2BSON('{"deptno":"H22", "admrdept":"E01", "deptname":"BRANCH OFFICE H2" }'),
JSON2BSON('{"deptno":"I22", "admrdept":"E01", "deptname":"BRANCH OFFICE I2" }'),
JSON2BSON('{"deptno":"J22", "admrdept":"E01", "deptname":"BRANCH OFFICE J2" }')
;
Explanation: Additional DEPARTMENT Table
In addition to the JSON_EMP table, the following SQL will generate a traditional table called JSON_DEPT
that can be used to determine the name of the department an individual works in.
End of explanation
%%sql
SELECT trim(JSON_VAL(EMP_DATA,'lastname','s:40')),
JSON_VAL(EMP_DATA,'pay.salary','f')
FROM JSON_EMP
WHERE
JSON_VAL(EMP_DATA,'empno','s:6') = '200170'
Explanation: Back to Top
<a id='retrieving'></a>
Retrieving Data from a BSON Document
Now that we have inserted some JSON data into a table, this section will explore
the use of the JSON_VAL function to retrieve individual fields from the documents.
This built-in function will return a value from a document in a format that you specify.
The ability to dynamically change the returned data type is extremely important when we
examine index creation in another section.
The JSON_VAL function has the format:
SQL
JSON_VAL(document, field, type)
JSON_VAL takes 3 arguments:
- document - BSON document
- field - The field we are looking for (search path)
- type - The return type of data being returned
The search path and type must be constants; they cannot be variables, so their
use inside user-defined functions is limited.
A typical JSON record will contain a variety of data types and structures as
illustrated by the following record from the JSON_EMP table.
JSON
{
"empno":"200170",
"firstnme":"KIYOSHI",
"midinit":"",
"lastname":"YAMAMOTO",
"workdept":"D11",
"phoneno":[2890],
"hiredate":"09/15/2005",
"job":"DESIGNER",
"edlevel":16,
"sex":"M",
"birthdate":"01/05/1981",
"pay": {
"salary":64680.00,
"bonus":500.00,
"comm":1974.00
}
}
There are number of fields with different formats, including strings (firstnme),
integers (edlevel), decimal (salary), date (hiredate), a number array (phoneno),
and a structure (pay). JSON data can consist of nested objects, arrays and very
complex structures. The format of a JSON object is checked when using the JSON2BSON
function and an error message will be issued if it does not conform to the
JSON specification.
The JSON_VAL function needs to know how to return the data type back from the JSON
record, so you need to specify what the format should be. The possible formats are:
| Code | Format |
|:------|:----------------------------------------------|
| n | DECFLOAT |
| i | INTEGER |
| l | BIGINT (notice this is a lowercase L) |
| f | DOUBLE |
| d | DATE |
| ts | TIMESTAMP (6) |
| t | TIME |
| s:n | A VARCHAR with a size of n being the maximum |
| b:n | A BINARY value with n being the maximum |
| u | Null check (0=null 1=not null) |
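As a mental model for the search path and the return-type codes, the behaviour described above can be sketched in Python. This is an illustrative analogue only; the function name `json_val`, the sample `emp` document, and the simplified coercion rules are stand-ins for demonstration, not the Db2 implementation.

```python
import json

def json_val(document, path, type_code):
    """Illustrative Python analogue of Db2's JSON_VAL (not the real API).

    Walks a dot-separated path ('pay.salary', 'phoneno.0'), then applies
    a simplified version of the return-type code.
    """
    node = json.loads(document) if isinstance(document, str) else document
    for part in path.split("."):
        if isinstance(node, list):                  # numeric step: array index
            idx = int(part)
            node = node[idx] if 0 <= idx < len(node) else None
        elif isinstance(node, dict):
            node = node.get(part)
        else:
            node = None
        if node is None:
            return None
    if isinstance(node, list):                      # bare array field: element zero
        node = node[0] if node else None
    if isinstance(node, dict):                      # non-atomic value -> NULL
        return None
    if type_code.startswith("s:"):                  # VARCHAR(n): NULL if too long
        limit = int(type_code[2:])
        return node if isinstance(node, str) and len(node) <= limit else None
    if type_code == "i":
        return int(node)
    if type_code == "f":
        return float(node)
    return node

# A record shaped like the employee documents above.
emp = ('{"lastname":"YAMAMOTO","edlevel":16,'
       '"phoneno":[2890],"pay":{"salary":64680.00}}')
```

With this sketch, `json_val(emp, 'pay.salary', 'f')` yields 64680.0, while `json_val(emp, 'lastname', 's:7')` yields None because YAMAMOTO is longer than seven characters, mirroring the NULL-on-truncation behaviour described below.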
Back to Top
<a id='atomic'></a>
Retrieving Atomic Values
This first example will retrieve the name and salary of the employee whose employee
number is "200170"
End of explanation
%%sql
SELECT JSON_VAL(EMP_DATA,'lastname','s:7')
FROM JSON_EMP
WHERE
JSON_VAL(EMP_DATA,'empno','s:6') = '200170'
Explanation: If the size of the field being returned is larger than the field specification,
you will get a NULL value returned, not a truncated value.
End of explanation
%%sql
SELECT LEFT(JSON_VAL(EMP_DATA,'lastname','s:20'),7)
FROM JSON_EMP
WHERE
JSON_VAL(EMP_DATA,'empno','s:6') = '200170'
Explanation: In the case of character fields, you may need to specify a larger return
size and then truncate it to get a subset of the data.
End of explanation
%sql -a SELECT JSON_VAL(EMP_DATA, 'phoneno.0', 'i') FROM JSON_EMP
Explanation: Back to Top
<a id='array'></a>
Retrieving Array Values
Selecting data from an array type will always give you the first value (element zero).
The employees all have extension numbers, but some of them have more than one.
Some of the extensions start with a zero, so since the column is being treated as an
integer you will get only 3 digits. It's probably better to define it as a character
string rather than a number! Note that the query uses the 'phoneno.0' path to request
element zero explicitly.
%sql SELECT JSON_VAL(EMP_DATA, 'phoneno', 'i:na') FROM JSON_EMP
Explanation: If you specify ":na" after the type specifier, you will get an error if the field
is an array type. Hopefully you already know the format of your JSON data and can
avoid having to check to see if arrays exist. What this statement will tell you is
that one of the records you were attempting to retrieve was an array type. In fact,
all the phone extensions are being treated as array types even though they have only
one value in many cases.
End of explanation
%%sql
SELECT JSON_VAL(EMP_DATA,'pay.salary','i'),
JSON_VAL(EMP_DATA,'pay.bonus','i'),
JSON_VAL(EMP_DATA,'pay.comm','i')
FROM JSON_EMP
WHERE
JSON_VAL(EMP_DATA,'empno','s:6') = '200170'
Explanation: If you need to access a specific array element in a field, you can use the "dot"
notation after the field name. The first element starts at zero. If we select
the 2nd element (.1) all the employees that have a second extension will have a
value retrieved while the ones who don't will have a null value.
Back to Top
<a id='structures'></a>
Retrieving Structured Fields
Structured fields are retrieved using the same dot notation as arrays.
The field is specified by using the "field.subfield" format and these fields can be
an arbitrary number of levels deep.
The pay field in the employee record is made up of three additional fields.
Python
"pay": {
"salary":64680.00,
"bonus":500.00,
"comm":1974.00
}
To retrieve these three fields, you need to explicitly name them since
retrieving pay alone will not work.
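The dot-notation lookup is easy to picture with a small Python analogue (this is only a mental model, not Db2's implementation):

```python
def get_path(doc, path):
    """Walk a parsed JSON document using JSON_VAL-style dot notation:
    "pay.salary" descends into objects, "phoneno.1" picks an array
    element. Returns None (think SQL NULL) when the path fails."""
    current = doc
    for part in path.split("."):
        if isinstance(current, dict) and part in current:
            current = current[part]
        elif isinstance(current, list) and part.isdigit() and int(part) < len(current):
            current = current[int(part)]
        else:
            return None
    return current
```

One difference to keep in mind: JSON_VAL returns NULL when asked for a non-atomic field like pay, while this sketch would hand back the nested dictionary.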
End of explanation
%%sql -a
SELECT JSON_VAL(EMP_DATA,'lastname','s:30'),
JSON_VAL(EMP_DATA,'midinit','u')
FROM JSON_EMP
ORDER BY 2
Explanation: If you attempt to retrieve the pay field, you will end up with a NULL value, not
an error code. The reason for this is that the JSON_VAL function cannot format the
field into an atomic value so it returns the NULL value instead.
Back to Top
<a id='nulls'></a>
Determining NULL Values in a Field
To determine whether a field exists, you need to use the "u" flag. The value returned will be either:
- 1 - The field exists, and it has a value (not null or empty string)
- 0 - The field exists, but the value is null or empty
- null - The field does not exist
In the JSON_EMP table, there are a few employees who do not have middle names.
The following query will return a value of 1, 0, or NULL depending on whether the
middle name exists for a record.
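As a mental model, the "u" flag behaves like this hypothetical Python helper (not part of Db2):

```python
def json_val_u(doc, field):
    """Mimic JSON_VAL's "u" flag: 1 if the field exists with a value,
    0 if it exists but is null or empty, None if the field is absent."""
    if field not in doc:
        return None          # field does not exist -> SQL NULL
    value = doc[field]
    if value is None or value == "":
        return 0             # field is present but null or empty
    return 1
```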
End of explanation
%%sql
SELECT COUNT(*) FROM JSON_EMP
WHERE JSON_VAL(EMP_DATA,'midinit','s:40') = '' OR
JSON_VAL(EMP_DATA,'midinit','u') IS NULL
Explanation: The results contain 40 employees who have a middle initial field, and two that do not.
The results can be misleading because an employee can have the midinit field defined,
but no value assigned to it:
Python
{
"empno":"000120",
"firstnme":"SEAN",
"midinit":"",
"lastname":"O''CONNELL",...
}
In this case, the employee does not have a middle name, but the field is present.
To determine whether an employee does not have a middle name, you will need to check
for a NULL value (the field does not exist) or the field exists and contains no content.
End of explanation
%%sql
SELECT COUNT(*) FROM JSON_EMP
WHERE JSON_VAL(EMP_DATA,'midinit','s:40') = ''
Explanation: If you only want to know how many employees have the middle initial field (midinit)
that is empty, compare the returned value to an empty string.
End of explanation
%%sql
SELECT JSON_VAL(EMP_DATA,'empno','s:6') AS EMPNO,
JSON_VAL(EMP_DATA,'lastname','s:20') AS LASTNAME,
JSON_VAL(DEPT_DATA,'deptname','s:30') AS DEPTNAME
FROM JSON_EMP, JSON_DEPT
WHERE
JSON_VAL(DEPT_DATA,'deptno','s:3') =
JSON_VAL(EMP_DATA,'workdept','s:3')
FETCH FIRST 5 ROWS ONLY
Explanation: Back to Top
<a id='joins'></a>
Joining JSON Tables
You can join tables with JSON columns by using the JSON_VAL function
to compare two values:
End of explanation
%%sql -q
DROP TABLE TYPES;
CREATE TABLE TYPES
(DATA BLOB(4000) INLINE LENGTH 4000);
INSERT INTO TYPES VALUES
JSON2BSON(
'{
"string" : "string",
"integer" : 1,
"number" : 1.1,
"date" : {"$date": "2016-06-20T13:00:00"},
"boolean" : true,
"array" : [1,2,3],
        "object" : {"type": "main", "phone": [1,2,3]}
}');
Explanation: You need to ensure that the data types from both JSON functions are compatible for
the join to work properly. In this case, the department number and the work department
are both returned as 3-byte character strings. If you decided to use integers
instead or a smaller string size, the join will not work as expected because
the conversion will result in truncated or NULL values.
If you plan on doing joins between JSON objects, you may want to consider creating
indexes on the documents to speed up the join process. More information on the use
of indexes is found at the end of this chapter.
Back to Top
<a id='datatypes'></a>
JSON Data Types
If you are unsure of what data type a field contains, you can use the JSON_TYPE
function to determine the type before retrieving the field.
The JSON_TYPE function has the format:
Python
ID = JSON_TYPE(document, field, 2048)
JSON_TYPE takes 3 arguments:
- document - BSON document
- field - The field we are looking for (search path)
- search path size - 2048 is the required value
The 2048 specifies the maximum length of the field parameter and should be
left at this value.
When querying the data types within a JSON document, the following values are returned.
ID | TYPE | ID | TYPE
---:| :------------------| -------: | :---------------------
1 | Double | 10 | Null
2 | String | 11 | Regular Expression
3 | Object | 12 | Future use
4 | Array | 13 | JavaScript
5 | Binary data | 14 | Symbol
6 | Undefined | 15 | Javascript (with scope)
7 | Object id | 16 | 32-bit integer
8 | Boolean | 17 | Timestamp
9 | Date | 18 | 64-bit integer
The next SQL statement will create a table with standard types within it.
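A Python sketch of how values map to the type IDs in the table above (illustrative only; Db2 performs this mapping inside JSON_TYPE):

```python
from datetime import datetime

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def bson_type_id(value):
    """Return the JSON_TYPE id for a Python value, following the table:
    1=double, 2=string, 3=object, 4=array, 8=boolean, 9=date,
    10=null, 16=32-bit integer, 18=64-bit integer."""
    if value is None:
        return 10
    if isinstance(value, bool):   # check bool before int: True is an int in Python
        return 8
    if isinstance(value, int):
        return 16 if INT32_MIN <= value <= INT32_MAX else 18
    if isinstance(value, float):
        return 1
    if isinstance(value, str):
        return 2
    if isinstance(value, datetime):
        return 9
    if isinstance(value, list):
        return 4
    if isinstance(value, dict):
        return 3
    raise TypeError(f"no BSON mapping for {type(value).__name__}")
```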
End of explanation
%%sql
SELECT 'STRING',JSON_TYPE(DATA, 'string', 2048) FROM TYPES
UNION ALL
SELECT 'INTEGER',JSON_TYPE(DATA, 'integer', 2048) FROM TYPES
UNION ALL
SELECT 'NUMBER',JSON_TYPE(DATA, 'number', 2048) FROM TYPES
UNION ALL
SELECT 'DATE',JSON_TYPE(DATA, 'date', 2048) FROM TYPES
UNION ALL
SELECT 'BOOLEAN', JSON_TYPE(DATA, 'boolean', 2048) FROM TYPES
UNION ALL
SELECT 'ARRAY', JSON_TYPE(DATA, 'array', 2048) FROM TYPES
UNION ALL
SELECT 'OBJECT', JSON_TYPE(DATA, 'object', 2048) FROM TYPES
Explanation: The following SQL will generate a list of data types and field names found within this document.
End of explanation
%%sql -q
DROP TABLE SANDBOX;
CREATE TABLE SANDBOX (DATA BLOB(4000) INLINE LENGTH 4000);
Explanation: The following sections will show how we can get atomic (non-array) types out of
a JSON document. We are not going to be specific about which documents we want, other
than which field we want to retrieve.
A temporary table called SANDBOX is used throughout these examples:
End of explanation
%%sql
INSERT INTO SANDBOX VALUES
JSON2BSON('{"count":9782333}')
Explanation: Back to Top
<a id='integers'></a>
JSON INTEGERS and BIGINT
Integers within JSON documents are easily identified as numbers that don't have any
decimal places in them. There are two different types of integers supported
within Db2, identified by the size (number of digits) of the number itself.
Integer - A set of digits that do not include a decimal place. The value must fall within the range -2,147,483,648 to 2,147,483,647.
Bigint - A set of digits that do not include a decimal place but exceed the range of an integer. The value must fall within the range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
You don't explicitly state the type of integer that you are using.
The system will detect the type based on its size.
The JSON_TYPE function will return a value of 16 for integers and 18 for a
large integer (BIGINT). To retrieve a value from an integer field you need to
use the "i" flag and "l" (lowercase L) for big integers.
This first SQL statement will create a regular integer field.
End of explanation
%%sql
SELECT JSON_TYPE(DATA,'count',2048) AS TYPE
FROM SANDBOX
Explanation: The JSON_TYPE function will verify that this is an integer field (Type=16).
End of explanation
%sql SELECT JSON_VAL(DATA,'count','i') FROM SANDBOX
Explanation: You can retrieve an integer value with either the 'i' flag or the 'l' flag.
This first SQL statement retrieves the value as an integer.
End of explanation
%sql SELECT JSON_VAL(DATA,'count','l') FROM SANDBOX
Explanation: We can ask that the value be interpreted as a BIGINT by using the 'l' flag,
so JSON_VAL will expand the size of the return value.
End of explanation
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"count":94123512223422}');
Explanation: The next SQL statement will create a field with a BIGINT size. Note that we don't
need to specify anything other than a very big number!
End of explanation
%sql SELECT JSON_TYPE(DATA,'count',2048) AS TYPE FROM SANDBOX
Explanation: The JSON_TYPE function will verify that this is a big integer field (Type=18).
End of explanation
%sql SELECT JSON_VAL(DATA,'count','i') FROM SANDBOX
Explanation: Returning the data as an integer type 'i' will fail since the number is too big
to fit into an integer format. Note that you do not get an error message -
a NULL value gets returned (which in Python is interpreted as None).
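The range check behind this behavior can be sketched in Python (json_val_int is a hypothetical helper, not the real JSON_VAL signature):

```python
def json_val_int(value, flag):
    """Sketch of JSON_VAL's integer handling: the "i" flag only returns
    values that fit a 32-bit INTEGER, while "l" allows a 64-bit BIGINT.
    Out-of-range values come back as None (SQL NULL), not an error."""
    bound = {"i": 2**31, "l": 2**63}[flag]
    if -bound <= value < bound:
        return value
    return None
```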
End of explanation
%sql SELECT JSON_VAL(DATA,'count','l') FROM SANDBOX
Explanation: Specifying the 'l' flag will make the data be returned properly.
End of explanation
%%sql
SELECT JSON_VAL(DATA,'count','n') AS DECIMAL,
JSON_VAL(DATA,'count','f') AS FLOAT
FROM SANDBOX
Explanation: Since we have an integer in the JSON field, we also have the option of returning
the value as a floating-point number (f) or as a decimal number (n). Either of
these options will work with integer values.
End of explanation
%%sql -q
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"salary":92342.20}');
Explanation: Back to Top
<a id='numbers'></a>
JSON NUMBERS and FLOATING POINT
JSON numbers are recognized by Db2 when there is a decimal point in the value.
Floating point values are recognized using the Exx specifier after the number
which represents the power of 10 that needs to be applied to the base value.
For instance, 1.0E01 is the value 10.
The JSON type for numbers is 1, whether it is in floating point format or decimal format.
The SQL statement below inserts a salary into the table (using the
standard decimal place notation).
End of explanation
%sql SELECT JSON_TYPE(DATA,'salary',2048) AS TYPE FROM SANDBOX
Explanation: The JSON_TYPE function will verify that this is a numeric field (Type=1).
End of explanation
%%sql
SELECT JSON_VAL(DATA,'salary','n') AS DECIMAL,
JSON_VAL(DATA,'salary','i') AS INTEGER,
JSON_VAL(DATA,'salary','f') AS FLOAT
FROM SANDBOX
Explanation: Numeric data can be retrieved in either number (n) format, integer (i - note that
you will get truncation), or floating point (f).
End of explanation
%sql SELECT DEC(JSON_VAL(DATA,'salary','n'),9,2) AS DECIMAL FROM SANDBOX
Explanation: You may wonder why number format (n) results in an answer that has a fractional
component that isn't exactly 92342.20. The reason is that Db2 is converting the
value to DECFLOAT(34) which supports a higher precision number, but can result in
fractions that can't be accurately represented within the binary format. Casting
the value to DEC(9,2) will properly format the number.
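Python has the same binary-versus-decimal distinction, which makes the effect easy to reproduce outside of Db2:

```python
from decimal import Decimal

# Binary floating point cannot hold most decimal fractions exactly,
# which is why the result needs a cast to DEC(9,2) on the Db2 side.
exact = Decimal("92342.20")                    # decimal arithmetic keeps 92342.20 exact
binary = Decimal(92342.20)                     # the same literal, via a binary float
two_places = binary.quantize(Decimal("0.01"))  # the DEC(9,2)-style fix
```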
End of explanation
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"salary":9.2523E01}');
Explanation: A floating-point number is recognized by the Exx specifier in the number. The
BSON function will tag this value as a number even though you specified it in floating
point format. The following SQL inserts the floating value into the table.
End of explanation
%sql SELECT JSON_TYPE(DATA,'salary',2048) AS TYPE FROM SANDBOX
Explanation: The JSON_TYPE function will verify that this is a floating point field (Type=1).
End of explanation
%%sql
SELECT JSON_VAL(DATA,'salary','n') AS DECIMAL,
JSON_VAL(DATA,'salary','i') AS INTEGER,
JSON_VAL(DATA,'salary','f') AS FLOAT
FROM SANDBOX
Explanation: The floating-point value can be retrieved as a number, integer, or floating point value.
End of explanation
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"valid":true, "invalid":false}');
Explanation: Back to Top
<a id='boolean'></a>
JSON BOOLEAN VALUES
JSON has a data type which can be true or false (boolean). Db2 doesn't have an
equivalent data type for boolean, so we need to retrieve it as an integer or
character string (true/false).
The JSON type for boolean values is 8.
The SQL statement below inserts a true and false value into the table.
End of explanation
%sql SELECT JSON_TYPE(DATA,'valid',2048) AS TYPE FROM SANDBOX
Explanation: We will double-check what type the field is in the JSON record.
End of explanation
%%sql
SELECT JSON_VAL(DATA,'valid','n') AS TRUE_DECIMAL,
JSON_VAL(DATA,'valid','i') AS TRUE_INTEGER,
JSON_VAL(DATA,'invalid','n') AS FALSE_DECIMAL,
JSON_VAL(DATA,'invalid','i') AS FALSE_INTEGER
FROM SANDBOX
Explanation: To retrieve the value, we can ask that it be formatted as an integer or number.
End of explanation
%%sql
SELECT JSON_VAL(DATA,'valid','s:5') AS TRUE_STRING,
JSON_VAL(DATA,'valid','b:2') AS TRUE_BINARY,
JSON_VAL(DATA,'invalid','s:5') AS FALSE_STRING,
JSON_VAL(DATA,'invalid','b:2') AS FALSE_BINARY
FROM SANDBOX
Explanation: You can also retrieve a boolean field as a character or
binary field, but the results are not what you would expect
with binary.
End of explanation
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"today":{"$date":"2016-07-01T12:00:00"}}');
Explanation: Back to Top
<a id='date'></a>
JSON DATE, TIME, and TIMESTAMPS
This first SQL statement will insert a JSON field that uses the $date modifier.
End of explanation
%sql SELECT JSON_TYPE(DATA,'today',2048) FROM SANDBOX
Explanation: Querying the data type of this field using JSON_VAL will return a value of 9 (date type).
End of explanation
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"today":"2016-07-01"}');
SELECT JSON_VAL(DATA,'today','s:10') FROM SANDBOX;
Explanation: If you decide to use a character string to represent a date, you can use either
the "s:x" specification to return the date as a string,
or use "d" to have it displayed as a date. This first SQL
statement returns the date as a string.
End of explanation
%sql SELECT JSON_VAL(DATA,'today','d') FROM SANDBOX
Explanation: Using the 'd' specification will return the value as a date.
End of explanation
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"today":"' || VARCHAR(NOW()) || '"}');
SELECT JSON_VAL(DATA,'today','s:30') FROM SANDBOX;
Explanation: What about timestamps? If you decide to store a timestamp into a field, you can
retrieve it in a variety of ways. This first set of SQL statements will retrieve
it as a string.
End of explanation
%sql SELECT JSON_VAL(DATA,'today','d') FROM SANDBOX
Explanation: Retrieving it as a Date will also work, but the time portion will be removed.
End of explanation
%sql SELECT JSON_VAL(DATA,'today','ts') FROM SANDBOX
Explanation: You can also ask for the timestamp value by using the 'ts'
specification. Note that you can't get just the time portion
unless you use a SQL function to cast it.
End of explanation
%sql SELECT TIME(JSON_VAL(DATA,'today','ts')) FROM SANDBOX
Explanation: To force the value to return just the time portion, either
store the data as a time value (HH:MM:SS) string or store a
timestamp and use the TIME function to extract just that
portion of the timestamp.
End of explanation
%sql SELECT JSON_VAL(EMP_DATA, 'lastname', 's:10') FROM JSON_EMP
Explanation: Back to Top
<a id='strings'></a>
JSON Strings
For character strings, you must specify what the maximum
length is. This example will return the size of the lastname
field as 10 characters long.
End of explanation
%sql SELECT JSON_VAL(EMP_DATA, 'lastname', 's:8') FROM JSON_EMP
Explanation: You must specify a length for the 's' parameter otherwise
you will get an error from the function. If the size of the
character string is too large to return, then the function
will return a null value for that field.
End of explanation
%sql SELECT JSON_VAL(EMP_DATA, 'phoneno', 'i') FROM JSON_EMP
Explanation: Back to Top
<a id='arrays'></a>
Dealing with JSON Arrays
JSON arrays require specialized handling since there is no easy way to map an array to a single column in a relational table. Instead, there are a number of functions which will convert the array into a table so that you can access the individual elements in an SQL statement.
<a id='table'></a>
Accessing all Elements in an Array
The following query works because we do not treat the field phoneno as an array:
End of explanation
%%sql
SELECT PHONES.TYPE, CAST(PHONES.VALUE AS VARCHAR(10)) AS VALUE
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
WHERE JSON_VAL(E.EMP_DATA,'empno','s:6') = '000010'
Explanation: By default, only the first number of an array is returned
when you use JSON_VAL. However, there will be situations
where you do want to return all the values in an array. This
is where the JSON_TABLE function must be used.
The format of the JSON_TABLE function is:
Python
JSON_TABLE(document, field, type)
The arguments are:
document - BSON document
field - The field we are looking for
type - The return type of data being returned
JSON_TABLE returns two columns: Type and Value. The type
is one of a possible 18 values found in the table below. The
Value is the actual contents of the field.
ID | TYPE | ID | TYPE
---:| :------------------| -------: | :---------------------
1 | Double | 10 | Null
2 | String | 11 | Regular Expression
3 | Object | 12 | Future use
4 | Array | 13 | JavaScript
5 | Binary data | 14 | Symbol
6 | Undefined | 15 | Javascript (with scope)
7 | Object id | 16 | 32-bit integer
8 | Boolean | 17 | Timestamp
9 | Date | 18 | 64-bit integer
The TYPE field is probably something you wouldn't require
as part of your queries since you are already specifying the
return type in the function.
The format of the JSON_TABLE function is like JSON_VAL
except that it returns a table of values. You must use this
function as part of the FROM clause and a table function
specification. For example, to return the contents of the
phone extension array for just one employee (000010) we can
use the following JSON_TABLE function.
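Conceptually, JSON_TABLE flattens an array field into rows, something like this Python sketch (the real function also returns the Type column, which is omitted here, and the scalar-field behavior shown is an assumption of the sketch):

```python
def json_table(doc, field):
    """Sketch of JSON_TABLE: expand an array field into a list of row
    values. In this sketch a scalar field yields a single row and a
    missing field yields no rows."""
    value = doc.get(field)
    if isinstance(value, list):
        return list(value)
    return [value] if value is not None else []
```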
End of explanation
%%sql -a
SELECT JSON_VAL(E.EMP_DATA, 'lastname', 's:10') AS LASTNAME,
CAST(PHONES.VALUE AS VARCHAR(10)) AS PHONE
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
Explanation: The TABLE( ... ) specification in the FROM clause is used
for table functions. The results that are returned from the
TABLE function are treated the same as a traditional table.
To create a query that gives the name of every employee and their extensions would require the following query.
End of explanation
%%sql
SELECT JSON_VAL(E.EMP_DATA, 'lastname', 's:10') AS LASTNAME,
CAST (PHONES.VALUE AS VARCHAR(10)) AS PHONE
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
ORDER BY PHONE
Explanation: Only a subset of the results is shown above, but you will
see that there are multiple lines for employees who have
more than one extension.
The results of a TABLE function must be named (AS ...) if
you need to refer to the results of the TABLE function in
the SELECT list or in other parts of the SQL.
You can use other SQL operators to sort or organize the
results. For instance, we can use the ORDER BY operator to
find out which employees have the same extension. Note how
the TABLE function is named PHONES and the VALUES column is
renamed to PHONE.
End of explanation
%%sql
SELECT CAST(PHONES.VALUE AS VARCHAR(10)) AS PHONE, COUNT(*) AS COUNT
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
GROUP BY PHONES.VALUE HAVING COUNT(*) > 1
ORDER BY PHONES.VALUE
Explanation: You can even find out how many people are sharing
extensions! The HAVING clause tells Db2 to only return
groupings where there are more than one employee with the
same extension.
End of explanation
%%sql
SELECT JSON_VAL(E.EMP_DATA, 'lastname', 's:10') AS LASTNAME,
JSON_LEN(E.EMP_DATA, 'phoneno') AS PHONE_COUNT
FROM JSON_EMP E
Explanation: Back to Top
<a id='length'></a>
Determining the Size of an Array
The previous example showed how we could retrieve the
values from within an array of a document. Sometimes an
application needs to determine how many values are in the
array itself. The JSON_LEN function is used to figure out
what the array count is.
The format of the JSON_LEN function is:
Python
count = JSON_LEN(document,field)
The arguments are:
document - BSON document
field - The field we are looking for
count - Number of array entries or NULL if the field is not an array
If the field is not an array, this function will return a
null value, otherwise it will give you the number of values
in the array. In our previous example, we could determine
the number of extensions per person by taking advantage of
the JSON_LEN function.
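In Python terms, JSON_LEN behaves like this sketch (illustrative only):

```python
def json_len(doc, field):
    """Sketch of JSON_LEN: number of array entries, or None (SQL NULL)
    when the field is missing or is not an array."""
    value = doc.get(field)
    if isinstance(value, list):
        return len(value)
    return None
```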
End of explanation
%%sql
SELECT JSON_VAL(E.EMP_DATA, 'lastname', 's:10') AS LASTNAME,
CAST(PHONES.VALUE AS VARCHAR(10)) AS PHONE
FROM JSON_EMP E,
TABLE( JSON_TABLE(E.EMP_DATA,'phoneno','i') ) AS PHONES
WHERE PHONES.VALUE = 1422
Explanation: Back to Top
<a id='getpos'></a>
Finding a Value within an Array
The JSON_TABLE and JSON_LEN functions can be used to
retrieve all the values from an array, but searching for a
specific array value is difficult to do. One way to search
array values is to extract everything using the JSON_TABLE
function.
End of explanation
%%sql
SELECT JSON_VAL(EMP_DATA, 'lastname', 's:10') AS LASTNAME
FROM JSON_EMP
WHERE JSON_GET_POS_ARR_INDEX(EMP_DATA,
JSON2BSON('{"phoneno":1422}')) >= 0
Explanation: An easier way is to use the JSON_GET_POS_ARR_INDEX function.
This function will search array values without having to
extract the array values with the JSON_TABLE function.
The format of the JSON_GET_POS_ARR_INDEX function is:
Python
element = JSON_GET_POS_ARR_INDEX(document, field)
The arguments are:
- document - BSON document
- field - The field we are looking for and its value
- element - The first occurrence of the value in the array
The format of the field argument is "{field:value}" and it needs to be in
BSON format. This means you need to add the JSON2BSON
function around the field specification.
Python
JSON2BSON( '{"field":"value"}' )
This function only tests for equivalence and the data type should match what is
already in the field. The return value is the position
within the array that the value was found, where the first
element starts at zero.
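A Python sketch of the search semantics (the -1 not-found value is an assumption of this sketch; the SQL example simply tests for a result >= 0):

```python
def get_pos_arr_index(doc, search):
    """Sketch of JSON_GET_POS_ARR_INDEX: `search` is a one-entry
    {field: value} dict. Return the first array position holding an
    equal value (0-based), or -1 when nothing matches. Matching is
    type-sensitive, so the number 1422 does not match the string "1422"."""
    (field, target), = search.items()
    value = doc.get(field)
    if not isinstance(value, list):
        return -1
    for position, element in enumerate(value):
        if type(element) is type(target) and element == target:
            return position
    return -1
```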
In our JSON_EMP table, each employee has one or more phone
numbers. The following SQL will retrieve all employees who
have the extension 1422:
End of explanation
%%sql
DELETE FROM SANDBOX;
INSERT INTO SANDBOX VALUES
JSON2BSON('{"phone":["1111","2222","3333"]}');
Explanation: If we had used quotes around the phone number, the function would not have matched any of
the values in the table.
Back to Top
<a id='updating'></a>
Updating JSON Documents
There are a couple of approaches available to updating JSON
documents. One approach is to extract the document from the
table in a text form using BSON2JSON and then using string
functions or regular expressions to modify the data.
The other option is to use the JSON_UPDATE statement. The
syntax of the JSON_UPDATE function is:
Python
JSON_UPDATE(document, '{$set: {field:value}}')
JSON_UPDATE(document, '{$unset: {field:null}}')
The arguments are:
- document - BSON document
- field - The field we are looking for
- value - The value we want to set the field to
There are three possible outcomes from using the JSON_UPDATE statement:
- If the field is found, the existing value is replaced with the new one
- If the field is not found, the field:value pair is added to the document
- Using the $unset keyword and setting the value to null will remove the field from the document
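As a mental model (not Db2's implementation), the three outcomes look like this in Python; only single-level paths are handled in this sketch:

```python
def json_update(doc, set_fields=None, unset_fields=None):
    """Sketch of JSON_UPDATE's $set/$unset on a parsed document.
    "phone.0" addresses an array element; naming an array field
    without an element replaces the whole array; $unset drops the
    field entirely."""
    doc = dict(doc)                            # work on a shallow copy
    for path, value in (set_fields or {}).items():
        field, _, index = path.partition(".")
        if index and isinstance(doc.get(field), list):
            elements = list(doc[field])
            elements[int(index)] = value       # update one array element
            doc[field] = elements
        else:
            doc[field] = value                 # add the field or replace it outright
    for field in (unset_fields or []):
        doc.pop(field, None)                   # $unset: remove the field
    return doc
```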
The field can specify a portion of a structure, or an element of an
array using the dot notation. The following SQL will
illustrate how values can be added and removed from a document.
A single record that contains 3 phone number extensions is
added to a table:
End of explanation
%%sql
UPDATE SANDBOX
SET DATA =
JSON_UPDATE(DATA,'{ $set: {"lastname":"HAAS"}}')
Explanation: To add a new field to the record, the JSON_UPDATE function needs to specify the
field and value pair.
End of explanation
%sql -j SELECT BSON2JSON(DATA) FROM SANDBOX
Explanation: Retrieving the document shows that the lastname field has now been added to the record.
End of explanation
%%sql -j
UPDATE SANDBOX
SET DATA =
JSON_UPDATE(DATA,'{ $set: {"phone":"9999"}}');
SELECT BSON2JSON(DATA) FROM SANDBOX;
%sql -j SELECT BSON2JSON(DATA) FROM SANDBOX
Explanation: If you specify a field that is an array type and do not
specify an element, you will end up replacing the entire
field with the value.
End of explanation
%%sql -j
UPDATE SANDBOX
SET DATA =
JSON_UPDATE(DATA,'{ $set: {"phone.0":9999}}');
SELECT BSON2JSON(DATA) FROM SANDBOX;
Explanation: Running the SQL against the original phone data will work properly.
End of explanation
%%sql -j
UPDATE SANDBOX
SET DATA =
JSON_UPDATE(DATA,'{ $unset: {"phone":null}}');
SELECT BSON2JSON(DATA) FROM SANDBOX;
Explanation: To remove the phone number field you need to use the $unset keyword and set the field to null.
End of explanation
%%sql -q
DROP INDEX IX_JSON;
SELECT JSON_VAL(EMP_DATA, 'lastname', 's:20') AS LASTNAME
FROM JSON_EMP
WHERE JSON_VAL(EMP_DATA, 'empno', 's:6') = '000010';
Explanation: Back to Top
<a id='indexing'></a>
Indexing JSON Documents
Db2 supports computed indexes, which allow functions
such as JSON_VAL to be used as part of the index
definition. For instance, searching for an employee number
will result in a scan against the table if no indexes are
defined:
End of explanation
noindex = %sql -t \
SELECT JSON_VAL(EMP_DATA, 'lastname', 's:20') AS LASTNAME \
FROM JSON_EMP \
WHERE JSON_VAL(EMP_DATA, 'empno', 's:6') = '000010'
Explanation: The following command will time the select statement.
End of explanation
%%sql
CREATE INDEX IX_JSON ON JSON_EMP
(JSON_VAL(EMP_DATA,'empno','s:6'));
Explanation: To create an index on the empno field, we use the JSON_VAL function to extract the
empno from the JSON field.
End of explanation
withindex = %sql -t \
SELECT JSON_VAL(EMP_DATA, 'lastname', 's:20') AS LASTNAME \
FROM JSON_EMP \
           WHERE JSON_VAL(EMP_DATA, 'empno', 's:6') = '000010'
Explanation: Rerunning the SQL results in the following performance:
End of explanation
%%sql -pb
WITH RESULTS(RUN, RESULT) AS (
VALUES ('No Index',:noindex),('With Index',:withindex)
)
SELECT * FROM RESULTS
Explanation: Db2 can now use the index to retrieve the record and the following plot shows the increased throughput.
End of explanation
%%sql -q
DROP TABLE BASE_EMP_TXS;
CREATE TABLE BASE_EMP_TXS (
SEQNO INT NOT NULL GENERATED ALWAYS AS IDENTITY,
INFO VARCHAR(4000),
BSONINFO BLOB(4000) INLINE LENGTH 4000
);
Explanation: Back to Top
<a id='simple'></a>
Simplifying JSON SQL Inserts and Retrieval
From a development perspective, you always need to convert
documents to and from JSON using the BSON2JSON and JSON2BSON
functions. There are ways to hide these functions from an
application and simplify some of the programming.
One approach to simplifying the conversion of documents
between formats is to use INSTEAD OF triggers. These
triggers can intercept transactions before they are applied
to the base tables. This approach requires that we create a
view on top of an existing table.
The first step is to create the base table with two copies
of the JSON column. One will contain the original JSON
character string while the second will contain the converted
BSON. For this example, the JSON column will be called INFO,
and the BSON column will be called BSONINFO. The use of two
columns containing JSON would appear strange at first. The
reason for the two columns is that Db2 expects the BLOB
column to contain binary data. You cannot insert a character
string (JSON) into the BSON column without converting it
first. Db2 will raise an error so the JSON column is there
to avoid an error while the conversion takes place.
From a debugging perspective, we can keep both the CLOB and
BLOB values in this table if we want. The trigger will set
the JSON column to null after the BSON column has been
populated.
End of explanation
%%sql
CREATE OR REPLACE VIEW EMP_TXS AS
  (SELECT SEQNO, SYSTOOLS.BSON2JSON(BSONINFO) AS INFO FROM BASE_EMP_TXS)
Explanation: To use INSTEAD OF triggers, a view needs to be created on
top of the base table. Note that we explicitly use the
SYSTOOLS schema to make sure we are getting the correct
function used here.
End of explanation
%%sql -d
CREATE OR REPLACE TRIGGER I_EMP_TXS
INSTEAD OF INSERT ON EMP_TXS
REFERENCING NEW AS NEW_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
INSERT INTO BASE_EMP_TXS VALUES (
DEFAULT,
NULL,
SYSTOOLS.JSON2BSON(NEW_TXS.INFO)
);
END
@
Explanation: At this point we can create three INSTEAD OF triggers to handle inserts,
updates, and deletes on the view.
On INSERT the DEFAULT keyword is used to generate the ID number, the JSON field is
set to NULL and the BSON column contains the converted value of the JSON string.
End of explanation
%%sql -d
CREATE OR REPLACE TRIGGER U_EMP_TXS
INSTEAD OF UPDATE ON EMP_TXS
REFERENCING NEW AS NEW_TXS OLD AS OLD_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE BASE_EMP_TXS
SET (INFO, BSONINFO) = (NULL,
SYSTOOLS.JSON2BSON(NEW_TXS.INFO))
WHERE
BASE_EMP_TXS.SEQNO = OLD_TXS.SEQNO;
END
@
Explanation: On UPDATES, the sequence number remains the same, and the BSON field is updated
with the contents of the JSON field.
End of explanation
%%sql -d
CREATE OR REPLACE TRIGGER D_EMP_TX
INSTEAD OF DELETE ON EMP_TXS
REFERENCING OLD AS OLD_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
DELETE FROM BASE_EMP_TXS
WHERE
BASE_EMP_TXS.SEQNO = OLD_TXS.SEQNO;
END
@
Explanation: Finally, the DELETE trigger will just remove the row.
End of explanation
%%sql
INSERT INTO EMP_TXS(INFO) VALUES (
'{
"empno":"000010",
"firstnme":"CHRISTINE",
"midinit":"I",
"lastname":"HAAS",
"workdept":"A00",
"phoneno":[3978],
"hiredate":"01/01/1995",
"job":"PRES",
"edlevel":18,
"sex":"F",
"birthdate":"08/24/1963",
"pay" : {
"salary":152750.00,
"bonus":1000.00,
"comm":4220.00}
}')
Explanation: Applications will only deal with the EMP_TXS view. Any
inserts will use the text version of the JSON and not have
to worry about using the JSON2BSON function since the
underlying INSTEAD OF trigger will take care of the
conversion.
The following insert statement only includes the JSON string
since the sequence number will be generated automatically as
part of the insert.
End of explanation
%sql -j SELECT INFO FROM EMP_TXS
Explanation: Selecting from the EMP_TXS view will return the JSON in a readable format:
End of explanation
%%sql -j
UPDATE EMP_TXS SET INFO = '{"empno":"000010"}' WHERE SEQNO = 1;
SELECT INFO FROM EMP_TXS;
Explanation: The base table only contains the BSON but the view translates the value back into a readable format.
An update statement that replaces the entire string works as expected.
End of explanation
%%sql
UPDATE BASE_EMP_TXS
SET BSONINFO = JSON_UPDATE(BSONINFO,
'{$set: {"empno":"111111"}}')
WHERE SEQNO = 1
Explanation: If you want to manipulate the BSON directly (say change the employee number),
you need to refer to the BASE table instead.
End of explanation
%sql -j SELECT INFO FROM EMP_TXS
Explanation: And we can check it using our original view.
End of explanation |
8,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TMY to Power Tutorial
This tutorial will walk through the process of going from TMY data to AC power using the SAPM.
Table of contents
Step1: Load TMY data
pvlib comes with a couple of TMY files, and we'll use one of them for simplicity. You could also load a file from disk, or specify a url. See this NREL website for a list of TMY files
Step2: The file handling above looks complicated because we're trying to account for the many different ways that people will run this notebook on their systems. You can just put a simple string path into the readtmy3 function if you know where the file is.
Let's look at the imported version of the TMY file.
Step3: This is a pandas DataFrame object. It has a lot of great properties that are beyond the scope of our tutorials.
Plot the GHI data from the TMY file
Step4: Calculate modeling intermediates
Before we can calculate power for all times in the TMY file, we will need to calculate
Step5: Solar position
Calculate the solar position for all times in the TMY file.
The default solar position algorithm is based on Reda and Andreas (2004). Our implementation is pretty fast, but you can make it even faster if you install numba and add method='nrel_numba' to the function call below.
Step6: The funny looking jump in the azimuth is just due to the coarse time sampling in the TMY file.
DNI ET
Calculate extra terrestrial radiation. This is needed for many plane of array diffuse irradiance models.
Step7: Airmass
Calculate airmass. Lots of model options here, see the atmosphere module tutorial for more details.
Step8: The funny appearance is due to aliasing and setting invalid numbers equal to NaN. Replot just a day or two and you'll see that the numbers are right.
POA sky diffuse
Use the Hay Davies model to calculate the plane of array diffuse sky radiation. See the irradiance module tutorial for comparisons of different models.
Step9: POA ground diffuse
Calculate ground diffuse. We specified the albedo above. You could have also provided a string to the surface_type keyword argument.
Step10: AOI
Calculate AOI
Step11: Note that AOI has values greater than 90 deg. This is ok.
POA total
Calculate POA irradiance
Step12: Cell and module temperature
Calculate pv cell and module temperature
Step13: DC power using SAPM
Get module data.
Step14: Choose a particular module
Step15: Calculate the effective irradiance
Step16: Run the SAPM using the parameters we calculated above.
Step17: DC power using single diode
Step18: AC power using SAPM
Get the inverter database from the web
Step19: Choose a particular inverter
Step20: Plot just a few days.
Step21: Some statistics on the AC power
Step23: We can change the color value c to see the sensitivity of model accuracy to measured meteorological conditions. It can be useful to define a simple plotting function for this kind of exploratory analysis.
Step25: Notice the use of the vmax keyword argument in the above example. The **kwargs pattern allows us to easily pass non-specified arguments to nested functions.
Step27: Next, we will assume that the SAPM model is representative of the real world performance so that we can use scipy's optimization routine to derive simulated PVUSA coefficients. You will need to install scipy to run these functions.
Here's one PVUSA reference | Python Code:
# built-in python modules
import os
import inspect
# scientific python add-ons
import numpy as np
import pandas as pd
# plotting stuff
# first line makes the plots appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
# finally, we import the pvlib library
import pvlib
Explanation: TMY to Power Tutorial
This tutorial will walk through the process of going from TMY data to AC power using the SAPM.
Table of contents:
1. Setup
2. Load TMY data
2. Calculate modeling intermediates
2. DC power using SAPM
2. AC power using SAPM
This tutorial requires pvlib >= 0.6.0.
Authors:
* Will Holmgren (@wholmgren), University of Arizona, July 2015, March 2016, August 2018.
* Rob Andrews (@Calama-Consulting), Heliolytics, June 2014
Setup
These are just your standard interactive scientific python imports that you'll get very used to using.
End of explanation
# Find the absolute file path to your pvlib installation
pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib)))
# absolute path to a data file
datapath = os.path.join(pvlib_abspath, 'data', '703165TY.csv')
# read tmy data with year values coerced to a single year
tmy_data, meta = pvlib.tmy.readtmy3(datapath, coerce_year=2015)
tmy_data.index.name = 'Time'
# TMY data seems to be given as hourly data with time stamp at the end
# shift the index 30 Minutes back for calculation of sun positions
tmy_data = tmy_data.shift(freq='-30Min')['2015']
Explanation: Load TMY data
pvlib comes with a couple of TMY files, and we'll use one of them for simplicity. You could also load a file from disk, or specify a url. See this NREL website for a list of TMY files:
http://rredc.nrel.gov/solar/old_data/nsrdb/1991-2005/tmy3/by_state_and_city.html
End of explanation
tmy_data.head()
Explanation: The file handling above looks complicated because we're trying to account for the many different ways that people will run this notebook on their systems. You can just put a simple string path into the readtmy3 function if you know where the file is.
Let's look at the imported version of the TMY file.
End of explanation
tmy_data['GHI'].plot()
plt.ylabel('Irradiance (W/m**2)')
Explanation: This is a pandas DataFrame object. It has a lot of great properties that are beyond the scope of our tutorials.
Plot the GHI data from the TMY file
End of explanation
surface_tilt = 30
surface_azimuth = 180 # pvlib uses 0=North, 90=East, 180=South, 270=West convention
albedo = 0.2
# create pvlib Location object based on meta data
sand_point = pvlib.location.Location(meta['latitude'], meta['longitude'], tz='US/Alaska',
altitude=meta['altitude'], name=meta['Name'].replace('"',''))
print(sand_point)
Explanation: Calculate modeling intermediates
Before we can calculate power for all times in the TMY file, we will need to calculate:
* solar position
* extra terrestrial radiation
* airmass
* angle of incidence
* POA sky and ground diffuse radiation
* cell and module temperatures
First, define some PV system parameters.
End of explanation
solpos = pvlib.solarposition.get_solarposition(tmy_data.index, sand_point.latitude, sand_point.longitude)
solpos.plot()
Explanation: Solar position
Calculate the solar position for all times in the TMY file.
The default solar position algorithm is based on Reda and Andreas (2004). Our implementation is pretty fast, but you can make it even faster if you install numba and add method='nrel_numba' to the function call below.
End of explanation
# the extraradiation function returns a simple numpy array
# instead of a nice pandas series. We will change this
# in a future version
dni_extra = pvlib.irradiance.get_extra_radiation(tmy_data.index)
dni_extra = pd.Series(dni_extra, index=tmy_data.index)
dni_extra.plot()
plt.ylabel('Extra terrestrial radiation (W/m**2)')
Explanation: The funny looking jump in the azimuth is just due to the coarse time sampling in the TMY file.
DNI ET
Calculate extra terrestrial radiation. This is needed for many plane of array diffuse irradiance models.
End of explanation
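The quantity being computed here can be sketched without pvlib: extraterrestrial irradiance is roughly the solar constant modulated by a small annual cosine correction for the Earth-Sun distance. A minimal stand-in (the simple cosine form below is only an illustration; pvlib's get_extra_radiation offers more detailed methods):

```python
import math

SOLAR_CONSTANT = 1366.1  # W/m**2, a commonly used value

def dni_extra_simple(day_of_year):
    """Approximate extraterrestrial normal irradiance for day_of_year in 1..365."""
    b = 2.0 * math.pi * day_of_year / 365.0
    # ~3.3% annual swing from the elliptical orbit
    return SOLAR_CONSTANT * (1.0 + 0.033 * math.cos(b))

perihelion = dni_extra_simple(3)    # early January: Earth closest to the sun
aphelion = dni_extra_simple(185)    # early July: Earth farthest away
```

The detailed methods differ in the series expansion used, but the annual shape of the curve is the same.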
airmass = pvlib.atmosphere.get_relative_airmass(solpos['apparent_zenith'])
airmass.plot()
plt.ylabel('Airmass')
Explanation: Airmass
Calculate airmass. Lots of model options here, see the atmosphere module tutorial for more details.
End of explanation
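One of those model options can be written down directly. As an illustration, here is the Kasten and Young (1989) formula, one of the relative-airmass parameterizations (apparent zenith angle in degrees):

```python
import math

def relative_airmass_ky1989(zenith_deg):
    """Kasten & Young (1989) relative airmass for an apparent zenith angle in degrees."""
    z = float(zenith_deg)
    return 1.0 / (math.cos(math.radians(z)) + 0.50572 * (96.07995 - z) ** -1.6364)

overhead = relative_airmass_ky1989(0.0)   # sun at zenith: about one atmosphere
low_sun = relative_airmass_ky1989(60.0)   # roughly sec(60 deg) = 2 atmospheres
```

Near the horizon the plain secant approximation blows up; the small correction term keeps the value finite, which is the point of these empirical fits.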
poa_sky_diffuse = pvlib.irradiance.haydavies(surface_tilt, surface_azimuth,
tmy_data['DHI'], tmy_data['DNI'], dni_extra,
solpos['apparent_zenith'], solpos['azimuth'])
poa_sky_diffuse.plot()
plt.ylabel('Irradiance (W/m**2)')
Explanation: The funny appearance is due to aliasing and setting invalid numbers equal to NaN. Replot just a day or two and you'll see that the numbers are right.
POA sky diffuse
Use the Hay Davies model to calculate the plane of array diffuse sky radiation. See the irradiance module tutorial for comparisons of different models.
End of explanation
poa_ground_diffuse = pvlib.irradiance.get_ground_diffuse(surface_tilt, tmy_data['GHI'], albedo=albedo)
poa_ground_diffuse.plot()
plt.ylabel('Irradiance (W/m**2)')
Explanation: POA ground diffuse
Calculate ground diffuse. We specified the albedo above. You could have also provided a string to the surface_type keyword argument.
End of explanation
aoi = pvlib.irradiance.aoi(surface_tilt, surface_azimuth, solpos['apparent_zenith'], solpos['azimuth'])
aoi.plot()
plt.ylabel('Angle of incidence (deg)')
Explanation: AOI
Calculate AOI
End of explanation
poa_irrad = pvlib.irradiance.poa_components(aoi, tmy_data['DNI'], poa_sky_diffuse, poa_ground_diffuse)
poa_irrad.plot()
plt.ylabel('Irradiance (W/m**2)')
plt.title('POA Irradiance')
Explanation: Note that AOI has values greater than 90 deg. This is ok.
POA total
Calculate POA irradiance
End of explanation
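Conceptually, the total plane-of-array irradiance is just bookkeeping: a beam term (DNI projected onto the panel via the AOI) plus the sky and ground diffuse terms computed above. A simplified sketch with toy numbers (poa_components handles the details and returns the full component breakdown):

```python
import math

def poa_global_simple(dni, aoi_deg, sky_diffuse, ground_diffuse):
    """Beam + diffuse; the beam term is clipped to zero once AOI reaches 90 deg."""
    poa_direct = max(dni * math.cos(math.radians(aoi_deg)), 0.0)
    return poa_direct + sky_diffuse + ground_diffuse

midday = poa_global_simple(dni=800.0, aoi_deg=20.0, sky_diffuse=80.0, ground_diffuse=15.0)
backlit = poa_global_simple(dni=800.0, aoi_deg=120.0, sky_diffuse=80.0, ground_diffuse=15.0)
```

The clipping is why AOI values greater than 90 deg are harmless: the beam contribution simply drops to zero and only diffuse light remains.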
pvtemps = pvlib.pvsystem.sapm_celltemp(poa_irrad['poa_global'], tmy_data['Wspd'], tmy_data['DryBulb'])
pvtemps.plot()
plt.ylabel('Temperature (C)')
Explanation: Cell and module temperature
Calculate pv cell and module temperature
End of explanation
sandia_modules = pvlib.pvsystem.retrieve_sam(name='SandiaMod')
Explanation: DC power using SAPM
Get module data.
End of explanation
sandia_module = sandia_modules.Canadian_Solar_CS5P_220M___2009_
sandia_module
Explanation: Choose a particular module
End of explanation
effective_irradiance = pvlib.pvsystem.sapm_effective_irradiance(poa_irrad.poa_direct, poa_irrad.poa_diffuse, airmass, aoi, sandia_module)
Explanation: Calculate the effective irradiance
End of explanation
sapm_out = pvlib.pvsystem.sapm(effective_irradiance, pvtemps.temp_cell, sandia_module)
print(sapm_out.head())
sapm_out[['p_mp']].plot()
plt.ylabel('DC Power (W)')
Explanation: Run the SAPM using the parameters we calculated above.
End of explanation
cec_modules = pvlib.pvsystem.retrieve_sam(name='CECMod')
cec_module = cec_modules.Canadian_Solar_CS5P_220M
d = {k: cec_module[k] for k in ['a_ref', 'I_L_ref', 'I_o_ref', 'R_sh_ref', 'R_s']}
photocurrent, saturation_current, resistance_series, resistance_shunt, nNsVth = (
pvlib.pvsystem.calcparams_desoto(poa_irrad.poa_global,
pvtemps['temp_cell'],
cec_module['alpha_sc'],
EgRef=1.121,
dEgdT=-0.0002677, **d))
single_diode_out = pvlib.pvsystem.singlediode(photocurrent, saturation_current,
resistance_series, resistance_shunt, nNsVth)
single_diode_out[['p_mp']].plot()
plt.ylabel('DC Power (W)')
Explanation: DC power using single diode
End of explanation
sapm_inverters = pvlib.pvsystem.retrieve_sam('sandiainverter')
Explanation: AC power using SAPM
Get the inverter database from the web
End of explanation
sapm_inverter = sapm_inverters['ABB__MICRO_0_25_I_OUTD_US_208_208V__CEC_2014_']
sapm_inverter
p_acs = pd.DataFrame()
p_acs['sapm'] = pvlib.pvsystem.snlinverter(sapm_out.v_mp, sapm_out.p_mp, sapm_inverter)
p_acs['sd'] = pvlib.pvsystem.snlinverter(single_diode_out.v_mp, single_diode_out.p_mp, sapm_inverter)
p_acs.plot()
plt.ylabel('AC Power (W)')
diff = p_acs['sapm'] - p_acs['sd']
diff.plot()
plt.ylabel('SAPM - SD Power (W)')
Explanation: Choose a particular inverter
End of explanation
p_acs.loc['2015-07-05':'2015-07-06'].plot()
Explanation: Plot just a few days.
End of explanation
p_acs.describe()
p_acs.sum()
# create data for a y=x line
p_ac_max = p_acs.max().max()
yxline = np.arange(0, p_ac_max)
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111, aspect='equal')
sc = ax.scatter(p_acs['sd'], p_acs['sapm'], c=poa_irrad.poa_global, alpha=1)
ax.plot(yxline, yxline, 'r', linewidth=3)
ax.set_xlim(0, None)
ax.set_ylim(0, None)
ax.set_xlabel('Single Diode model')
ax.set_ylabel('Sandia model')
fig.colorbar(sc, label='POA Global (W/m**2)')
Explanation: Some statistics on the AC power
End of explanation
def sapm_sd_scatter(c_data, label=None, **kwargs):
Display a scatter plot of SAPM p_ac vs. single diode p_ac.
You need to re-execute this cell if you re-run the p_ac calculation.
Parameters
----------
c_data : array-like
Determines the color of each point on the scatter plot.
Must be same length as p_acs.
kwargs passed to ``scatter``.
Returns
-------
tuple of fig, ax objects
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111, aspect='equal')
sc = ax.scatter(p_acs['sd'], p_acs['sapm'], c=c_data, alpha=1, **kwargs)
ax.plot(yxline, yxline, 'r', linewidth=3)
ax.set_xlim(0, None)
ax.set_ylim(0, None)
ax.set_xlabel('Single diode model power (W)')
ax.set_ylabel('Sandia model power (W)')
fig.colorbar(sc, label='{}'.format(label), shrink=0.75)
return fig, ax
sapm_sd_scatter(tmy_data.DryBulb, label='Temperature (deg C)')
sapm_sd_scatter(tmy_data.DNI, label='DNI (W/m**2)')
sapm_sd_scatter(tmy_data.AOD, label='AOD')
sapm_sd_scatter(tmy_data.Wspd, label='Wind speed', vmax=10)
Explanation: We can change the color value c to see the sensitivity of model accuracy to measured meteorological conditions. It can be useful to define a simple plotting function for this kind of exploratory analysis.
End of explanation
def sapm_other_scatter(c_data, x_data, clabel=None, xlabel=None, aspect_equal=False, **kwargs):
Display a scatter plot of SAPM p_ac vs. something else.
You need to re-execute this cell if you re-run the p_ac calculation.
Parameters
----------
c_data : array-like
Determines the color of each point on the scatter plot.
Must be same length as p_acs.
x_data : array-like
kwargs passed to ``scatter``.
Returns
-------
tuple of fig, ax objects
fig = plt.figure(figsize=(12,12))
if aspect_equal:
ax = fig.add_subplot(111, aspect='equal')
else:
ax = fig.add_subplot(111)
sc = ax.scatter(x_data, p_acs['sapm'], c=c_data, alpha=1, cmap=mpl.cm.YlGnBu_r, **kwargs)
ax.set_xlim(0, None)
ax.set_ylim(0, None)
ax.set_xlabel('{}'.format(xlabel))
ax.set_ylabel('Sandia model power (W)')
fig.colorbar(sc, label='{}'.format(clabel), shrink=0.75)
return fig, ax
sapm_other_scatter(tmy_data.DryBulb, tmy_data.GHI, clabel='Temperature (deg C)', xlabel='GHI (W/m**2)')
Explanation: Notice the use of the vmax keyword argument in the above example. The **kwargs pattern allows us to easily pass non-specified arguments to nested functions.
End of explanation
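The forwarding pattern is worth isolating. A minimal sketch with plain functions standing in for the plotting calls:

```python
def inner(x, vmax=None, alpha=1.0):
    # stand-in for a call like ax.scatter(..., vmax=..., alpha=...)
    return {'x': x, 'vmax': vmax, 'alpha': alpha}

def outer(x, **kwargs):
    # outer does not need to know inner's full signature:
    # any extra keyword arguments are forwarded untouched
    return inner(x, **kwargs)

forwarded = outer(3, vmax=10)
```

This is why passing vmax=10 through sapm_sd_scatter above required no change to the function body.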
def pvusa(pvusa_data, a, b, c, d):
Calculates system power according to the PVUSA equation
P = I * (a + b*I + c*W + d*T)
where
P is the output power,
I is the plane of array irradiance,
W is the wind speed, and
T is the temperature
Parameters
----------
pvusa_data : pd.DataFrame
Must contain the columns 'I', 'W', and 'T'
a : float
I coefficient
b : float
I*I coefficient
c : float
I*W coefficient
d : float
I*T coefficient
Returns
-------
power : pd.Series
Power calculated using the PVUSA model.
return pvusa_data['I'] * (a + b*pvusa_data['I'] + c*pvusa_data['W'] + d*pvusa_data['T'])
from scipy import optimize
pvusa_data = pd.DataFrame()
pvusa_data['I'] = poa_irrad.poa_global
pvusa_data['W'] = tmy_data.Wspd
pvusa_data['T'] = tmy_data.DryBulb
popt, pcov = optimize.curve_fit(pvusa, pvusa_data.dropna(), p_acs.sapm.values, p0=(.0001,0.0001,.001,.001))
print('optimized coefs:\n{}'.format(popt))
print('covariances:\n{}'.format(pcov))
power_pvusa = pvusa(pvusa_data, *popt)
fig, ax = sapm_other_scatter(tmy_data.DryBulb, power_pvusa, clabel='Temperature (deg C)',
aspect_equal=True, xlabel='PVUSA (W)')
maxmax = max(ax.get_xlim()[1], ax.get_ylim()[1])
ax.set_ylim(None, maxmax)
ax.set_xlim(None, maxmax)
ax.plot(np.arange(maxmax), np.arange(maxmax), 'r')
Explanation: Next, we will assume that the SAPM model is representative of the real world performance so that we can use scipy's optimization routine to derive simulated PVUSA coefficients. You will need to install scipy to run these functions.
Here's one PVUSA reference:
http://www.nrel.gov/docs/fy09osti/45376.pdf
End of explanation |
8,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Preprocessing using tf.transform and Dataflow </h1>
This notebook illustrates
Step1: You need to restart your kernel to register the new installs running the below cells
Step3: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
Step4: <h2> Create ML dataset using tf.transform and Dataflow </h2>
<p>
Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
<p>
Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about <b>30 minutes</b> for me. If you wish to continue without doing this step, you can copy my preprocessed output | Python Code:
%%bash
conda update -y -n base -c defaults conda
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz
pip install apache-beam[gcp]==2.9.0
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
Explanation: <h1> Preprocessing using tf.transform and Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using tf.transform and Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select `python2`. 
Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.
* TFT 0.8.0
* TF 1.8 or higher
* Apache Beam [GCP] 2.5.0 or higher
End of explanation
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR PROJECT ID
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: You need to restart your kernel to register the new installs running the below cells
End of explanation
query=
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
Explanation: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
End of explanation
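The role of that hash column can be sketched in plain Python: a deterministic hash of the year-month puts every record from the same month on the same side of the split, which is what the ABS(MOD(hashmonth, 4)) filters used later rely on. The hash below is a stand-in for BigQuery's FARM_FINGERPRINT, not the same function:

```python
import hashlib

def hashmonth(year, month):
    # stand-in for FARM_FINGERPRINT(CONCAT(year, month)); any stable hash works
    digest = hashlib.sha256('{}-{}'.format(year, month).encode('utf-8')).hexdigest()
    return int(digest[:15], 16)

def is_train(year, month):
    # mirrors the SQL filters: MOD 0, 1, 2 -> train, MOD 3 -> eval
    return hashmonth(year, month) % 4 < 3

splits = [is_train(y, m) for y in range(2001, 2009) for m in range(1, 13)]
train_fraction = sum(splits) / float(len(splits))  # roughly 3/4 of the months
```

Hashing the month rather than each row keeps all births from one month together, so the eval set is not contaminated by near-duplicate rows from the training months.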
%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
# cleanup: write out only the data we that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'num_workers': 4,
'max_num_workers': 5,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
Explanation: <h2> Create ML dataset using tf.transform and Dataflow </h2>
<p>
Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
<p>
Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about <b>30 minutes</b> for me. If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
</pre>
End of explanation |
8,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy data structures
When we looked at Python data structures, it was obvious that the only way to deal with arrays of values (matrices / vectors etc.) would be via lists and lists of lists.
This is slow and inefficient, both to execute and to write.
Numpy attempts to fix this.
Step1: Speed
Step2: Views of the data (are free)
It costs very little to look at data in a different way (e.g. to view a 2D array as a 1D vector).
Making a copy is a different story
Step3: Exercise
Step4: Broadcasting is a way of looping on arrays which have "compatible" but unequal sizes.
For example, the element-wise multiplication of 2 arrays
python
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 2.0])
print a * b
has an equivalent
Step5: Arrays are compatible as long as each of their dimensions (shape) is either equal to the other or 1.
Thus, above, the multiplication works when a.shape is (1,3) and b.shape is either (1,3) or (1,1)
(Actually, these are (3,) and (1,) in the examples above.)
Step6: In multiple dimensions, the rule applies but, perhaps, is less immediately intuitive
Step7: Note that this also works for
Step8: But not for
Step9: Vector Operations
Step10: Exercise
Step11: Exercise | Python Code:
import numpy as np
## This is a list of everything in the module
np.__all__
an_array = np.array([0,1,2,3,4,5,6])
print an_array
print
print type(an_array)
print
help(an_array)
A = np.zeros((4,4))
print A
print
print A.shape
print
print A.diagonal()
print
A[0,0] = 2.0
print A
np.fill_diagonal(A, 1.0)
print A
B = A.diagonal()
B[0] = 2.0
for i in range(0,A.shape[0]):
A[i,i] = 1.0
print A
print
A[:,2] = 2.0
print A
print
A[2,:] = 4.0
print A
print
print A.T
print
A[...] = 0.0
print A
print
for i in range(0,A.shape[0]):
A[i,:] = float(i)
print A
print
for i in range(0,A.shape[0]):
A[i,:] = i
print A
print
print A[::2,::2]
print
print A[::-1,::-1]
Explanation: Numpy data structures
When we looked at Python data structures, it was obvious that the only way to deal with arrays of values (matrices / vectors etc.) would be via lists and lists of lists.
This is slow and inefficient, both to execute and to write.
Numpy attempts to fix this.
End of explanation
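The contrast with plain lists shows up even before any timing: elementwise arithmetic on nested lists needs explicit loops, while an ndarray expresses it in one line. A minimal sketch:

```python
import numpy as np

nested = [[1.0, 2.0], [3.0, 4.0]]

# elementwise doubling with lists of lists: explicit (slow) Python loops
doubled_list = [[2.0 * v for v in row] for row in nested]

# the same operation on an ndarray: one vectorized expression
arr = np.array(nested)
doubled_arr = 2.0 * arr
```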
%%timeit
B = np.zeros((1000,1000))
for i in range(0,1000):
for j in range(0,1000):
B[i,j] = 2.0
%%timeit
B = np.zeros((1000,1000))
B[:,:] = 2.0
%%timeit
B = np.zeros((1000,1000))
B[...] = 2.0
Explanation: Speed
End of explanation
print A.reshape((2,8))
print
print A.reshape((-1))
print A.ravel()
print
print A.reshape((1,-1))
print
%%timeit
A.reshape((1,-1))
%%timeit
elements = A.shape[0]*A.shape[1]
B = np.empty(elements)
B[...] = A[:,:].ravel()
%%timeit
elements = A.shape[0]*A.shape[1]
B = np.empty(elements)
for i in range(0,A.shape[0]):
for j in range(0,A.shape[1]):
B[i+j*A.shape[1]] = A[i,j]
Explanation: Views of the data (are free)
It costs very little to look at data in a different way (e.g. to view a 2D array as a 1D vector).
Making a copy is a different story
End of explanation
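np.shares_memory makes the view-versus-copy distinction explicit; a small sketch (names chosen so as not to clobber the A used above):

```python
import numpy as np

M = np.arange(16.0).reshape(4, 4)

flat_view = M.reshape(1, -1)   # reshape on a contiguous array returns a view
flat_copy = M.flatten()        # flatten always copies the data

flat_view[0, 0] = 99.0         # writing through the view changes M itself
```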
AA = np.zeros((100,100))
AA[10,11] = 1.0
AA[99,1] = 2.0
cond = np.where(AA >= 1.0)
print cond
print AA[cond]
print AA[ AA >= 1]
Explanation: Exercise: Try this again for a 10000x10000 array
Indexing / broadcasting
In numpy, we can index an array by explicitly specifying elements, by specifying slices, by supplying lists of indices (or arrays), we can also supply a boolean array of the same shape as the original array which will select / return an array of all those entries where True applies.
Although some of these might seem difficult to use, they are often the result of other numpy operations. For example np.where converts a truth array to a list of indices.
End of explanation
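A compact recap of those indexing styles (positional lists, a boolean mask, and np.where converting a mask back to indices):

```python
import numpy as np

v = np.array([3.0, -1.0, 7.0, 0.0, -5.0])

by_position = v[[0, 2]]      # fancy indexing with a list of positions
mask = v > 0.0               # boolean array with the same shape as v
by_mask = v[mask]            # keeps the entries where the mask is True
indices = np.where(mask)[0]  # the same mask, as integer indices
```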
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 2.0])
print a * b
b = np.array([2.0])
print a * b
print a * 2.0
Explanation: Broadcasting is a way of looping on arrays which have "compatible" but unequal sizes.
For example, the element-wise multiplication of 2 arrays
python
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 2.0])
print a * b
has an equivalent:
python
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0])
print a * b
or
python
a = np.array([1.0, 2.0, 3.0])
b = 2.0
print a * b
in which the "appropriate" interpretation of b is made in each case to achieve the result.
End of explanation
print a.shape
print b.shape
print (a*b).shape
print (a+b).shape
aa = a.reshape(1,3)
bb = b.reshape(1,1)
print aa.shape
print bb.shape
print (aa*bb).shape
print (aa+bb).shape
Explanation: Arrays are compatible as long as each of their dimensions (shape) is either equal to the other or 1.
Thus, above, the multiplication works when a.shape is (1,3) and b.shape is either (1,3) or (1,1)
(Actually, these are (3,) and (1,) in the examples above.)
End of explanation
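The compatibility rule is simple enough to implement directly: right-align the two shapes, pad the shorter one with 1s, and require each pair of dimensions to match or contain a 1. A sketch, cross-checked against numpy's own np.broadcast:

```python
import numpy as np

def broadcast_shape(shape_a, shape_b):
    """Return the broadcast result shape, or raise ValueError if incompatible."""
    ndim = max(len(shape_a), len(shape_b))
    a = (1,) * (ndim - len(shape_a)) + tuple(shape_a)
    b = (1,) * (ndim - len(shape_b)) + tuple(shape_b)
    out = []
    for da, db in zip(a, b):
        if da != db and 1 not in (da, db):
            raise ValueError('incompatible shapes')
        out.append(max(da, db))
    return tuple(out)

result = broadcast_shape((4, 3), (3,))
check = np.broadcast(np.empty((4, 3)), np.empty(3)).shape
```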
a = np.array([[ 0.0, 0.0, 0.0],
[10.0,10.0,10.0],
[20.0,20.0,20.0],
[30.0,30.0,30.0]])
b = np.array([[1.0,2.0,3.0]])
print a + b
print
print a.shape
print b.shape
print (a+b).shape
Explanation: In multiple dimensions, the rule applies but, perhaps, is less immediately intuitive:
End of explanation
a = np.array([[ 0.0, 0.0, 0.0],
[10.0,10.0,10.0],
[20.0,20.0,20.0],
[30.0,30.0,30.0]])
b = np.array([1.0,2.0,3.0])
print a + b
print
print a.shape
print b.shape
print (a+b).shape
Explanation: Note that this also works for
End of explanation
a = np.array([[ 0.0, 0.0, 0.0],
[10.0,10.0,10.0],
[20.0,20.0,20.0],
[30.0,30.0,30.0]])
b = np.array([[1.0],[2.0],[3.0]])
print a.shape
print b.shape
print (a+b).shape
Explanation: But not for
End of explanation
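The failure is an ordinary ValueError at evaluation time. A minimal sketch, including one way to repair it (transposing b so its length-3 axis lines up with a's columns):

```python
import numpy as np

a2 = np.zeros((4, 3))
b2 = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1): 4 vs 3 in the leading axis

try:
    bad = a2 + b2                      # (4,3) and (3,1) do not broadcast
except ValueError:
    bad = None

fixed = a2 + b2.T                      # b2.T has shape (1, 3), which broadcasts
```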
X = np.arange(0.0, 2.0*np.pi, 0.0001)
print X[0:100]
import math
math.sin(X)
np.sin(X)
S = np.sin(X)
C = np.cos(X)
S2 = S**2 + C**2
print S2
print S2 - 1.0
test = np.isclose(S2,1.0)
print test
print np.where(test == False)
print np.where(S2 == 0.0)
Explanation: Vector Operations
End of explanation
X = np.linspace(0.0, 2.0*np.pi, 10000000)
print X.shape
# ...
%%timeit
S = np.sin(X)
%%timeit
S = np.empty_like(X)
for i, x in enumerate(X):
S[i] = math.sin(x)
X = np.linspace(0.0, 2.0*np.pi, 10000000)
Xj = X + 1.0j
print Xj.shape, Xj.dtype
%%timeit
Sj = np.sin(Xj)
import cmath
%%timeit
Sj = np.empty_like(Xj)
for i, x in enumerate(Xj):
Sj[i] = cmath.sin(x)
Explanation: Exercise: find out how long it takes to compute the sin, sqrt, power of a 1000000 length vector (array). How does this speed compare to using the normal math functions element by element in the array? What happens if X is actually a complex array?
Hints: you might find it useful to know about:
- np.linspace v np.arange
- np.empty_like or np.zeros_like
- the python enumerate function
- how to write a table in markdown
| description | time | notes |
|-----------------|--------|-------|
| np.sin | ? | |
| math.sin | ? | |
| | ? | - |
End of explanation
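Regarding the first two hints above, here is a short illustration (an addition, not part of the original exercise): np.linspace takes a number of points and includes the endpoint by default, np.arange takes a step size and excludes the stop value, and np.zeros_like/np.empty_like allocate a new array matching an existing one.

```python
import numpy as np

a = np.linspace(0.0, 1.0, 5)      # 5 points, endpoint included
print(a)                          # [0.   0.25 0.5  0.75 1.  ]

b = np.arange(0.0, 1.0, 0.25)     # step 0.25, stop value excluded
print(b)                          # [0.   0.25 0.5  0.75]

c = np.zeros_like(a)              # same shape and dtype as a, filled with zeros
print(c.shape, c.dtype)
```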
# Test the results here
A = np.array(([1.0,1.0,1.0,1.0],[2.0,2.0,2.0,2.0]))
B = np.array(([3.0,3.0,3.0,3.0],[4.0,4.0,4.0,4.0]))
C = np.array(([5.0,5.0,5.0,5.0],[6.0,6.0,6.0,6.0]))
R = np.concatenate((A,B,C))
print(R)
print()
R = np.concatenate((A,B,C), axis=1)
print(R)
Explanation: Exercise: look through the functions below from numpy, choose 3 of them and work out how to use them on arrays of data. Write a few lines to explain what you find.
np.max v. np.argmax
np.where
np.logical_and
np.fill_diagonal
np.count_nonzero
np.isinf and np.isnan
Here is an example:
np.concatenate takes a number of arrays and glues them together. For 1D arrays this is simple:
```python
A = np.array([1.0,1.0,1.0,1.0])
B = np.array([2.0,2.0,2.0,2.0])
C = np.array([3.0,3.0,3.0,3.0])
R = np.concatenate((A,B,C))
# R is now: array([ 1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.])
```
an equivalent statement is np.hstack((A,B,C)) but note the difference with np.vstack((A,B,C))
With higher dimensional arrays, the gluing takes place along one axis:
```python
A = np.array(([1.0,1.0,1.0,1.0],[2.0,2.0,2.0,2.0]))
B = np.array(([3.0,3.0,3.0,3.0],[4.0,4.0,4.0,4.0]))
C = np.array(([5.0,5.0,5.0,5.0],[6.0,6.0,6.0,6.0]))
R = np.concatenate((A,B,C))
print(R)
print()
R = np.concatenate((A,B,C), axis=1)
print(R)
```
End of explanation |
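As one more worked example for the exercise above (an addition in the same spirit as the np.concatenate write-up): np.max returns the largest value while np.argmax returns its index, np.where returns the indices that satisfy a condition, and np.logical_and combines boolean masks elementwise.

```python
import numpy as np

v = np.array([3.0, 7.0, 1.0, 7.0, 2.0])

print(np.max(v))      # 7.0 -- the maximum value itself
print(np.argmax(v))   # 1   -- index of the FIRST occurrence of the maximum

idx = np.where(v > 2.0)          # indices where the condition holds
print(idx)                       # (array([0, 1, 3]),)

mask = np.logical_and(v > 2.0, v < 7.0)
print(np.count_nonzero(mask))    # 1 -- only v[0] == 3.0 lies in (2, 7)
```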
8,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Effect Size
Examples and exercises for a tutorial on statistical inference.
Copyright 2016 Allen Downey
License
Step1: Part One
To explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S.
I'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable).
Step2: The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes an rv object and returns a pair of NumPy arrays.
Step3: Here's what the two distributions look like.
Step4: Let's assume for now that those are the true distributions for the population.
I'll use rvs to generate random samples from the population distributions. Note that these are totally random, totally representative samples, with no measurement error!
Step5: Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation.
Step6: The sample mean is close to the population mean, but not exact, as expected.
Step7: And the results are similar for the female sample.
Now, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means
Step8: On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems
Step9: STOP HERE
Step10: A better, but slightly more complicated threshold is the place where the PDFs cross.
Step11: In this example, there's not much difference between the two thresholds.
Now we can count how many men are below the threshold
Step12: And how many women are above it
Step13: The "overlap" is the total area under the curves that ends up on the wrong side of the threshold.
Step14: Or in more practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex
Step15: Another way to quantify the difference between distributions is what's called "probability of superiority", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman.
Exercise 2
Step17: Overlap (or misclassification rate) and "probability of superiority" have two good properties
Step18: Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. This implementation uses the "pooled standard deviation", which is a weighted average of the standard deviations of the two groups.
And here's the result for the difference in height between men and women.
Step20: Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated.
Here's a function that encapsulates the code we already saw for computing overlap and probability of superiority.
Step22: Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority.
Step23: Here's an example that demonstrates the function
Step24: And an interactive widget you can use to visualize what different values of $d$ mean | Python Code:
%matplotlib inline
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# seed the random number generator so we all get the same results
numpy.random.seed(17)
# some nice colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
Explanation: Effect Size
Examples and exercises for a tutorial on statistical inference.
Copyright 2016 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
mu1, sig1 = 178, 7.7
male_height = scipy.stats.norm(mu1, sig1)
mu2, sig2 = 163, 7.3
female_height = scipy.stats.norm(mu2, sig2)
Explanation: Part One
To explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S.
I'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable).
End of explanation
def eval_pdf(rv, num=4):
mean, std = rv.mean(), rv.std()
xs = numpy.linspace(mean - num*std, mean + num*std, 100)
ys = rv.pdf(xs)
return xs, ys
Explanation: The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes an rv object and returns a pair of NumPy arrays.
End of explanation
xs, ys = eval_pdf(male_height)
pyplot.plot(xs, ys, label='male', linewidth=4, color=COLOR2)
xs, ys = eval_pdf(female_height)
pyplot.plot(xs, ys, label='female', linewidth=4, color=COLOR3)
pyplot.xlabel('height (cm)')
None
Explanation: Here's what the two distributions look like.
End of explanation
male_sample = male_height.rvs(1000)
female_sample = female_height.rvs(1000)
Explanation: Let's assume for now that those are the true distributions for the population.
I'll use rvs to generate random samples from the population distributions. Note that these are totally random, totally representative samples, with no measurement error!
End of explanation
mean1, std1 = male_sample.mean(), male_sample.std()
mean1, std1
Explanation: Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation.
End of explanation
mean2, std2 = female_sample.mean(), female_sample.std()
mean2, std2
Explanation: The sample mean is close to the population mean, but not exact, as expected.
End of explanation
difference_in_means = male_sample.mean() - female_sample.mean()
difference_in_means # in cm
Explanation: And the results are similar for the female sample.
Now, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means:
End of explanation
# Solution goes here
relative_difference = difference_in_means / male_sample.mean()
print(relative_difference * 100) # percent
# A problem with relative differences is that you have to choose which mean to express them relative to.
relative_difference = difference_in_means / female_sample.mean()
print(relative_difference * 100) # percent
Explanation: On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems:
Without knowing more about the distributions (like the standard deviations) it's hard to interpret whether a difference like 15 cm is a lot or not.
The magnitude of the difference depends on the units of measure, making it hard to compare across different studies.
There are a number of ways to quantify the difference between distributions. A simple option is to express the difference as a percentage of the mean.
Exercise 1: what is the relative difference in means, expressed as a percentage?
End of explanation
simple_thresh = (mean1 + mean2) / 2
simple_thresh
Explanation: STOP HERE: We'll regroup and discuss before you move on.
Part Two
An alternative way to express the difference between distributions is to see how much they overlap. To define overlap, we choose a threshold between the two means. The simple threshold is the midpoint between the means:
End of explanation
thresh = (std1 * mean2 + std2 * mean1) / (std1 + std2)
thresh
Explanation: A better, but slightly more complicated threshold is the place where the PDFs cross.
End of explanation
male_below_thresh = sum(male_sample < thresh)
male_below_thresh
Explanation: In this example, there's not much difference between the two thresholds.
Now we can count how many men are below the threshold:
End of explanation
female_above_thresh = sum(female_sample > thresh)
female_above_thresh
Explanation: And how many women are above it:
End of explanation
overlap = male_below_thresh / len(male_sample) + female_above_thresh / len(female_sample)
overlap
Explanation: The "overlap" is the total area under the curves that ends up on the wrong side of the threshold.
End of explanation
misclassification_rate = overlap / 2
misclassification_rate
Explanation: Or in more practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex:
End of explanation
# Solution goes here
sum(x > y for x, y in zip(male_sample, female_sample)) / len(male_sample)
Explanation: Another way to quantify the difference between distributions is what's called "probability of superiority", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman.
Exercise 2: Suppose I choose a man and a woman at random. What is the probability that the man is taller?
End of explanation
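The sampling estimate above can also be cross-checked analytically (an added aside, not in the original notebook): for independent X ~ N(mu1, sig1^2) and Y ~ N(mu2, sig2^2), the difference X - Y is normal with mean mu1 - mu2 and variance sig1^2 + sig2^2, so P(X > Y) is a single CDF evaluation.

```python
import numpy as np
import scipy.stats

mu1, sig1 = 178, 7.7   # male height parameters used above
mu2, sig2 = 163, 7.3   # female height parameters used above

# P(X > Y) = P(X - Y > 0) = Phi((mu1 - mu2) / sqrt(sig1**2 + sig2**2))
p_superiority = scipy.stats.norm.cdf((mu1 - mu2) / np.sqrt(sig1**2 + sig2**2))
print(p_superiority)   # about 0.92, close to the sample estimate
```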
def CohenEffectSize(group1, group2):
    """Compute Cohen's d.

    group1: Series or NumPy array
    group2: Series or NumPy array

    returns: float
    """
diff = group1.mean() - group2.mean()
n1, n2 = len(group1), len(group2)
var1 = group1.var()
var2 = group2.var()
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / numpy.sqrt(pooled_var)
return d
Explanation: Overlap (or misclassification rate) and "probability of superiority" have two good properties:
As probabilities, they don't depend on units of measure, so they are comparable between studies.
They are expressed in operational terms, so a reader has a sense of what practical effect the difference makes.
Cohen's d
There is one other common way to express the difference between distributions. Cohen's $d$ is the difference in means, standardized by dividing by the standard deviation. Here's a function that computes it:
End of explanation
CohenEffectSize(male_sample, female_sample)
Explanation: Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. This implementation uses the "pooled standard deviation", which is a weighted average of the standard deviations of the two groups.
And here's the result for the difference in height between men and women.
End of explanation
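For calibration, when the two distributions are normal with equal variance, Cohen's d maps directly onto the earlier statistics: the misclassification rate with the midpoint threshold is Phi(-d/2) and the probability of superiority is Phi(d/sqrt(2)). A small sketch of that mapping (an addition; the equal-variance assumption only holds approximately for the height data):

```python
import numpy as np
import scipy.stats

def d_to_stats(d):
    # Each equal-variance normal puts Phi(-d/2) of its mass on the wrong
    # side of the midpoint threshold.
    misclassification = scipy.stats.norm.cdf(-d / 2.0)
    # The difference of two unit-variance normals has standard deviation sqrt(2).
    superiority = scipy.stats.norm.cdf(d / np.sqrt(2.0))
    return misclassification, superiority

for d in [0.2, 0.5, 0.8, 1.9]:
    m, s = d_to_stats(d)
    print('d=%.1f  misclassification=%.3f  superiority=%.3f' % (d, m, s))
```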
def overlap_superiority(control, treatment, n=1000):
    """Estimates overlap and superiority based on a sample.

    control: scipy.stats rv object
    treatment: scipy.stats rv object
    n: sample size
    """
control_sample = control.rvs(n)
treatment_sample = treatment.rvs(n)
thresh = (control.mean() + treatment.mean()) / 2
control_above = sum(control_sample > thresh)
treatment_below = sum(treatment_sample < thresh)
overlap = (control_above + treatment_below) / n
superiority = sum(x > y for x, y in zip(treatment_sample, control_sample)) / n
return overlap, superiority
Explanation: Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated.
Here's a function that encapsulates the code we already saw for computing overlap and probability of superiority.
End of explanation
def plot_pdfs(cohen_d=2):
    """Plot PDFs for distributions that differ by some number of stds.

    cohen_d: number of standard deviations between the means
    """
control = scipy.stats.norm(0, 1)
treatment = scipy.stats.norm(cohen_d, 1)
xs, ys = eval_pdf(control)
pyplot.fill_between(xs, ys, label='control', color=COLOR3, alpha=0.7)
xs, ys = eval_pdf(treatment)
pyplot.fill_between(xs, ys, label='treatment', color=COLOR2, alpha=0.7)
o, s = overlap_superiority(control, treatment)
print('overlap', o)
print('superiority', s)
Explanation: Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority.
End of explanation
plot_pdfs(2)
Explanation: Here's an example that demonstrates the function:
End of explanation
slider = widgets.FloatSlider(min=0, max=4, value=2)
interact(plot_pdfs, cohen_d=slider)
None
Explanation: And an interactive widget you can use to visualize what different values of $d$ mean:
End of explanation |
8,494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this tutorial we examine the effect of changing the target(s) on the results of a horsetail matching optimization.
We'll use TP3 from the demo problems. We also define a function for easy plotting using matplotlib.
Step1: In the following code we set up a horsetail matching optimization using test problem 3, and then run optimizations under three targets | Python Code:
from horsetailmatching import HorsetailMatching, GaussianParameter
from horsetailmatching.demoproblems import TP3
from scipy.optimize import minimize
import numpy as np
import matplotlib.pyplot as plt
def plotHorsetail(theHM, c='b', label=''):
(q, h, t), _, _ = theHM.getHorsetail()
plt.plot(q, h, c=c, label=label)
plt.plot(t, h, c=c, linestyle='dashed')
plt.xlim([-10, 10])
Explanation: In this tutorial we examine the effect of changing the target(s) on the results of a horsetail matching optimization.
We'll use TP3 from the demo problems. We also define a function for easy plotting using matplotlib.
End of explanation
u1 = GaussianParameter()
def standardTarget(h):
return 0.
theHM = HorsetailMatching(TP3, u1, ftarget=standardTarget, samples_prob=5000)
solution1 = minimize(theHM.evalMetric, x0=0.6, method='COBYLA',
constraints=[{'type': 'ineq', 'fun': lambda x: x}, {'type': 'ineq', 'fun': lambda x: 1-x}])
theHM.evalMetric(solution1.x)
print(solution1)
plotHorsetail(theHM, c='b', label='Standard')
def riskAverseTarget(h):
return 0. - 3.*h**3.
theHM.ftarget=riskAverseTarget
solution2 = minimize(theHM.evalMetric, x0=0.6, method='COBYLA',
constraints=[{'type': 'ineq', 'fun': lambda x: x}, {'type': 'ineq', 'fun': lambda x: 1-x}])
theHM.evalMetric(solution2.x)
print(solution2)
plotHorsetail(theHM, c='g', label='Risk Averse')
def veryRiskAverseTarget(h):
return 1. - 10.**h**10.
theHM.ftarget=veryRiskAverseTarget
solution3 = minimize(theHM.evalMetric, x0=0.6, method='COBYLA',
constraints=[{'type': 'ineq', 'fun': lambda x: x}, {'type': 'ineq', 'fun': lambda x: 1-x}])
theHM.evalMetric(solution3.x)
print(solution3)
plotHorsetail(theHM, c='r', label='Very Risk Averse')
plt.xlim([-10, 5])
plt.ylim([0, 1])
plt.xlabel('Quantity of Interest')
plt.legend(loc='lower left')
plt.plot()
plt.show()
Explanation: In the following code we set up a horsetail matching optimization using test problem 3, and then run optimizations under three targets: a standard target, a risk averse target, and a very risk averse target.
End of explanation |
8,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture notes from the fourth week
Programming for the Behavioral Sciences
A large part of running behavioural experiments concerns the preparation of stimuli, i.e., what you have your participants looking at. The goal of this week is to create stimuli for a visual search experiment where participants search for a target object among distractors (non-targets that distract you from finding the target). We want to create a stimulus image where we can flexibly control the background color of the image as well as the color, shape, and size of the target and distractors. An example stimulus is shown here; the red triangle is the target and the blue dots are the distractors
Step1: Two problems are visible
* The distractors overlap
* Parts of a distractor can be outside of the plot
One way to solve this is to ensure that the distractors are always separated by a large enough distance from other distractors and from the image border.
Step2: New problem: it seems that only the x- and y-coordinates of the grid elements were defined, but not the locations of ALL grid elements. How can this be done?
Step3: Now we know where distractors can be placed. But we don't want to put a distractor at each grid position; instead, we draw a number of them (say 10) at random. One way to do this is to 'shuffle' the array and then select the first 10 elements.
Step4: Dictionaries
In the assignment, dictionaries will be used as containers of information about the background, target, and distractors. A dictionary is just like it sounds: given a key, it returns whatever is behind the door the key opens (a number, string, or any other Python object).
Step5: In this assignment, the dictionaries contain information about the visual search images | Python Code:
import numpy as np
import matplotlib.pyplot as plt
# A first attempt (we ignore the target for now)
image_size = (1280, 1024) # Size of background in pixels
nDistractors = 10 # Number of distractors
distractor_size = 500
# Generate positions where to put the distractors
xr = np.random.randint(0, image_size[0], nDistractors)
yr = np.random.randint(0, image_size[1], nDistractors)
plt.scatter(xr, yr, s=distractor_size, c='b', marker='v')
plt.axis([0, image_size[0], 0, image_size[1]])
plt.show()
Explanation: Lecture notes from the fourth week
Programming for the Behavioral Sciences
A large part of running behavioural experiments concerns the preparation of stimuli, i.e., what you have your participants looking at. The goal of this week is to create stimuli for a visual search experiment where participants search for a target object among distractors (non-targets that distract you from finding the target). We want to create a stimulus image where we can flexibly control the background color of the image as well as the color, shape, and size of the target and distractors. An example stimulus is shown here; the red triangle is the target and the blue dots are the distractors:
<img src="img\stimulus.png" alt="Stimulus" style="width:304px;height:228px;">
This week, we will use Matplotlib to generate the images. Next week PsychoPy will be used to accomplish the same task. The rest of the lectures in this course will be devoted to implement central parts of the experimental process in a visual search experiment: create stimuli, record data, and plot and analyze data.
Introduction to this week's exercise
So what do we need to know before we can start building the stimuli?
Information about the background (size, color)
Information about the target (position, shape, color)
Information about the distractors (positions, shape, color)
End of explanation
# Divide the plot into a 10 x 8 grid, and allow only one distractor in each grid
image_size = [1280, 1024]
grid_size = [10, 8]
grid_size_pixels_x = image_size[0] / grid_size[0]
grid_size_pixels_y = image_size[1] / grid_size[1]
x_c = np.arange(grid_size_pixels_x / 2.0, image_size[0], grid_size_pixels_x)
y_c = np.arange(grid_size_pixels_y / 2.0, image_size[1], grid_size_pixels_y)
# Plot the positions of the new grid
xx = np.ones(len(x_c))
yy = np.ones(len(y_c))
plt.plot(x_c, xx, 'ro')
plt.plot(yy, y_c, 'bo')
# plt.axis([0, image_size[0], 0, image_size[1]])
plt.show()
Explanation: Two problems are visible
* The distractors overlap
* Parts of a distractor can be outside of the plot
One way to solve this is to ensure that the distractors are always separated by a large enough distance from other distractors and from the image border.
End of explanation
# Meshgrid creates the whole grid (you could also use a double for-loop)
x_all, y_all = np.meshgrid(x_c, y_c)
# Reshape the positions into a N x 2 array (N rows, 2 columns), to make it easier to work with later
xy_all = np.vstack((x_all.flatten(), y_all.flatten())).T
# Plot all grid elements
plt.figure()
plt.plot(xy_all[:, 0], xy_all[:, 1], 'g+')
plt.show()
Explanation: New problem: it seems that only the x- and y-coordinates of the grid elements were defined, but not the locations of ALL grid elements. How can this be done?
End of explanation
import time # Used to animate below
nSelect = 10
# Randomly change the positions of the locations in the array
np.random.shuffle(xy_all)
# Plot the result (looks much better!)
plt.scatter(xy_all[:nSelect, 0], xy_all[:nSelect, 1], s=distractor_size, c='b', marker='v')
plt.axis([0, image_size[0], 0, image_size[1]])
plt.show()
Explanation: Now we know where distractors can be placed. But we don't want to put a distractor at each grid position; instead, we draw a number of them (say 10) at random. One way to do this is to 'shuffle' the array and then select the first 10 elements.
End of explanation
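An alternative to shuffling the whole array is np.random.choice with replace=False, which draws distinct row indices directly. Here is a self-contained sketch (an addition; it rebuilds a small stand-in for the xy_all grid above):

```python
import numpy as np

# Stand-in for the 10 x 8 grid of candidate positions built above.
x_c = np.arange(64.0, 1280.0, 128.0)   # 10 column centres
y_c = np.arange(64.0, 1024.0, 128.0)   # 8 row centres
x_all, y_all = np.meshgrid(x_c, y_c)
xy_all = np.vstack((x_all.flatten(), y_all.flatten())).T   # shape (80, 2)

# Draw 10 distinct row indices instead of shuffling all 80 rows.
idx = np.random.choice(len(xy_all), size=10, replace=False)
positions = xy_all[idx]
print(positions.shape)                 # (10, 2)
```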
# Example of how dictionaries are defined...
d1 = {'key1': 4, 'key2': 'my_value2'}
#... and how the values are accessed from them
print(d1['key2'])
# Unlike lists and arrays, variables in dictionaries are not ordered, so you can't do, e.g.,
# print(d1[0])
Explanation: Dictionaries
In the assignment, dictionaries will be used as containers of information about the background, target, and distractors. A dictionary is just like it sounds: given a key, it returns whatever is behind the door the key opens (a number, string, or any other Python object).
End of explanation
# Specify the size and color of the background. Use a dictionary
background = {'size':np.array([1280, 1024]),'color':0.5} # zero - black, 1 - white
# Specify the target
target = {'shape':'^', 'size':10, 'color':'r', 'face_color':'r'}
# Specify the distractors
distractor = {'shape':'o', 'size':10, 'color':'b', 'number_of':10}
# Test prints
print(background['color'], distractor['size'])
Explanation: In this assignment, the dictionaries contain information about the visual search images
End of explanation |
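To show how these dictionaries feed into the plotting code, here is a small sketch (an addition with hypothetical random positions; the 'Agg' backend is selected so it runs without a display):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')          # draw off-screen; no window is needed
import matplotlib.pyplot as plt

background = {'size': np.array([1280, 1024]), 'color': 0.5}
target = {'shape': '^', 'size': 10, 'color': 'r'}
distractor = {'shape': 'o', 'size': 10, 'color': 'b', 'number_of': 10}

fig, ax = plt.subplots()
ax.set_facecolor(str(background['color']))   # '0.5' is mid-gray in matplotlib

# Hypothetical positions, just to exercise the dictionaries.
xy = np.random.rand(distractor['number_of'], 2) * background['size']
ax.scatter(xy[:, 0], xy[:, 1], s=distractor['size'] ** 2,
           c=distractor['color'], marker=distractor['shape'])
ax.scatter([640], [512], s=target['size'] ** 2,
           c=target['color'], marker=target['shape'])
ax.set_xlim(0, background['size'][0])
ax.set_ylim(0, background['size'][1])
```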
8,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSV command-line kung fu
You might be surprised how much data slicing and dicing you can do from the command line using some simple tools and I/O redirection + piping. (See A Quick Introduction to Pipes and Redirection). We've already seen I/O redirection where we took the output of a command and wrote it to a file (/tmp/t.csv)
Step1: We didn't have to write any code. We didn't have to jump into a development environment or editor. We just asked for the sales for Annie. If we want to write the data to a file, we simply redirect it
Step2: Filtering with csvgrep
csvkit is an amazing package with lots of cool CSV utilities for use on the command line. csvgrep is one of them.
If we want a specific row ID, then we need to use the more powerful csvgrep not just grep. We use a different regular expression that looks for a specific string at the left edge of a line (^ means the beginning of a line, $ means end of line or end of record)
Step3: What if you want, say, two different rows added to another file? We do two greps and a >> concatenation redirection
Step4: Beginning, end of files
If we'd like to see just the header row, we can use head
Step5: If, on the other hand, we want to see everything but that row, we can use tail (which I pipe to head so then I see only the first two lines of output)
Step6: The output would normally be many thousands of lines here so I have piped the output to the head command to print just the first two rows. We can pipe many commands together, sending the output of one command as input to the next command.
Exercise
Count how many sales items there are in the Technology product category that are also High order priorities. Hint
Step7: Extracting columns with csvcut
Extracting columns is also pretty easy with csvcut. For example, let's say we wanted to get the customer name column (which is 12th by my count).
Step8: Actually, hang on a second. We don't want the Customer Name header to appear in the list so we combine with the tail we just saw to strip the header.
Step9: What if we want a unique list? All we have to do is sort and then call uniq
Step10: You can get multiple columns at once in the order specified. For example, here is how to get the sales ID and the customer name together (name first then ID)
Step11: Naturally, we can write any of this output to a file using the > redirection operator. Let's do that and put each of those columns into a separate file and then paste them back with the customer name first.
Step12: Amazing, right?! This is often a very efficient means of manipulating data files because you are directly talking to the operating system instead of through Python libraries. We also don't have to write any code, we just have to know some syntax for terminal commands.
Not impressed yet? Ok, how about creating a histogram indicating the number of sales per customer sorted in reverse numerical order? We've already got the list of customers and we can use an argument on uniq to get the count instead of just making a unique set. Then, we can use a second sort with arguments to reverse sort and use numeric rather than text-based sorting. This gives us a histogram
Step13: Exercise
Modify the command so that you get a histogram of the shipping mode. | Python Code:
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv | head -3
Explanation: CSV command-line kung fu
You might be surprised how much data slicing and dicing you can do from the command line using some simple tools and I/O redirection + piping. (See A Quick Introduction to Pipes and Redirection). We've already seen I/O redirection where we took the output of a command and wrote it to a file (/tmp/t.csv):
bash
$ iconv -c -f utf-8 -t ascii SampleSuperstoreSales.csv > /tmp/t.csv
Set up
bash
pip install csvkit
Extracting rows with grep
Now, let me introduce you to the grep command that lets us filter the lines in a file according to a regular expression. Here's how to find all rows that contain Annie Cyprus:
End of explanation
! grep 'Annie Cyprus' data/SampleSuperstoreSales.csv > /tmp/Annie.csv
! head -3 /tmp/Annie.csv # show first 3 lines of that new file
Explanation: We didn't have to write any code. We didn't have to jump into a development environment or editor. We just asked for the sales for Annie. If we want to write the data to a file, we simply redirect it:
End of explanation
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv
Explanation: Filtering with csvgrep
csvkit is an amazing package with lots of cool CSV utilities for use on the command line. csvgrep is one of them.
If we want a specific row ID, then we need to use the more powerful csvgrep not just grep. We use a different regular expression that looks for a specific string at the left edge of a line (^ means the beginning of a line, $ means end of line or end of record):
End of explanation
! csvgrep -c 1 -r '^80$' -e latin1 data/SampleSuperstoreSales.csv > /tmp/two.csv # write first row
! csvgrep -c 1 -r '^160$' -e latin1 data/SampleSuperstoreSales.csv >> /tmp/two.csv # append second row
! cat /tmp/two.csv
Explanation: What if you want, say, two different rows added to another file? We do two greps and a >> concatenation redirection:
End of explanation
! head -1 data/SampleSuperstoreSales.csv
Explanation: Beginning, end of files
If we'd like to see just the header row, we can use head:
End of explanation
! tail +2 data/SampleSuperstoreSales.csv | head -2
Explanation: If, on the other hand, we want to see everything but that row, we can use tail (which I pipe to head so then I see only the first two lines of output):
End of explanation
! grep Technology, data/SampleSuperstoreSales.csv | grep High, | wc -l
Explanation: The output would normally be many thousands of lines here so I have piped the output to the head command to print just the first two rows. We can pipe many commands together, sending the output of one command as input to the next command.
Exercise
Count how many sales items there are in the Technology product category that are also High order priorities. Hint: wc -l counts the number of lines.
End of explanation
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | head -10
Explanation: Extracting columns with csvcut
Extracting columns is also pretty easy with csvcut. For example, let's say we wanted to get the customer name column (which is 12th by my count).
End of explanation
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | head -10
Explanation: Actually, hang on a second. We don't want the Customer Name header to appear in the list so we combine with the tail we just saw to strip the header.
End of explanation
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq | head -10
Explanation: What if we want a unique list? All we have to do is sort and then call uniq:
End of explanation
! csvcut -c 12,2 -e latin1 data/SampleSuperstoreSales.csv |head -10
Explanation: You can get multiple columns at once in the order specified. For example, here is how to get the sales ID and the customer name together (name first then ID):
End of explanation
! csvcut -c 2 -e latin1 data/SampleSuperstoreSales.csv > /tmp/IDs
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv > /tmp/names
! paste /tmp/names /tmp/IDs | head -10
Explanation: Naturally, we can write any of this output to a file using the > redirection operator. Let's do that and put each of those columns into a separate file and then paste them back with the customer name first.
End of explanation
! csvcut -c 12 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
Explanation: Amazing, right?! This is often a very efficient means of manipulating data files because you are directly talking to the operating system instead of through Python libraries. We also don't have to write any code, we just have to know some syntax for terminal commands.
Not impressed yet? Ok, how about creating a histogram indicating the number of sales per customer sorted in reverse numerical order? We've already got the list of customers and we can use an argument on uniq to get the count instead of just making a unique set. Then, we can use a second sort with arguments to reverse sort and use numeric rather than text-based sorting. This gives us a histogram:
End of explanation
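The same customer histogram can be reproduced in pure Python with collections.Counter, which is handy when csvkit is unavailable (a sketch, assuming the same file layout with the customer name in column 12; the helper and its 1-based column argument are illustrative, not an existing tool):

```python
import csv
from collections import Counter

def column_histogram(path, column_index, top=10, encoding='latin1'):
    # column_index is 1-based, to match csvcut's -c argument.
    with open(path, newline='', encoding=encoding) as f:
        rows = csv.reader(f)
        next(rows)                                  # skip the header row
        counts = Counter(row[column_index - 1] for row in rows)
    return counts.most_common(top)                  # like `uniq -c | sort -rn | head`

# e.g. column_histogram('data/SampleSuperstoreSales.csv', 12)
```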
! csvcut -c 8 -e latin1 data/SampleSuperstoreSales.csv | tail +2 | sort | uniq -c | sort -r -n | head -10
Explanation: Exercise
Modify the command so that you get a histogram of the shipping mode.
End of explanation |
8,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
High-performance simulations with TFF
This tutorial will describe how to set up high-performance simulations with TFF
in a variety of common scenarios.
TODO(b/134543154)
Step1: Single-machine simulations
First, the basics. | Python Code:
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import time
import tensorflow as tf
import tensorflow_federated as tff
source, _ = tff.simulation.datasets.emnist.load_data()
def map_fn(example):
return collections.OrderedDict(
x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])
def client_data(n):
ds = source.create_tf_dataset_for_client(source.client_ids[n])
return ds.repeat(10).shuffle(500).batch(20).map(map_fn)
train_data = [client_data(n) for n in range(10)]
element_spec = train_data[0].element_spec
def model_fn():
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
return tff.learning.from_keras_model(
model,
input_spec=element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
trainer = tff.learning.build_federated_averaging_process(
model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))
def evaluate(num_rounds=10):
state = trainer.initialize()
for _ in range(num_rounds):
t1 = time.time()
state, metrics = trainer.next(state, train_data)
t2 = time.time()
print('metrics {m}, round time {t:.2f} seconds'.format(
m=metrics, t=t2 - t1))
Explanation: High-performance simulations with TFF
This tutorial will describe how to set up high-performance simulations with TFF
in a variety of common scenarios.
TODO(b/134543154): Populate the content, some of the things to cover here:
- using GPUs in a single-machine setup,
- multi-machine setup on GCP/GKE, with and without TPUs,
- interfacing MapReduce-like backends,
- current limitations and when/how they will be relaxed.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/simulations"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/simulations.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/simulations.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/federated/tutorials/simulations.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Before we begin
First, make sure your notebook is connected to a backend that has the relevant components (including gRPC dependencies for multi-machine scenarios) compiled.
Let's now start by loading the MNIST example from the TFF website and declaring a Python function that will run a small training loop over a group of 10 clients.
End of explanation
evaluate()
Explanation: Single-machine simulations
First, the basics.
End of explanation |
8,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression
Let's fabricate some data that shows a roughly linear relationship between page speed and amount purchased
Step1: As we only have two features, we can keep it simple and just use scipy.stats.linregress
Step2: Not surprisingly, our R-squared value shows a really good fit
Step3: Let's use the slope and intercept we got from the regression to plot predicted values vs. observed | Python Code:
%matplotlib inline
import numpy as np
from pylab import *
pageSpeeds = np.random.normal(3.0, 1.0, 1000)
purchaseAmount = 100 - (pageSpeeds + np.random.normal(0, 0.1, 1000)) * 3
scatter(pageSpeeds, purchaseAmount)
Explanation: Linear Regression
Let's fabricate some data that shows a roughly linear relationship between page speed and amount purchased:
End of explanation
from scipy import stats
slope, intercept, r_value, p_value, std_err = stats.linregress(pageSpeeds, purchaseAmount)
Explanation: As we only have two features, we can keep it simple and just use scipy.stats.linregress:
End of explanation
r_value ** 2
Explanation: Not surprisingly, our R-squared value shows a really good fit:
End of explanation
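To make the R-squared number concrete, it can be recomputed by hand as 1 minus the ratio of residual variance to total variance. A self-contained sketch on a small toy dataset (not the page-speed data above):

```python
# Toy data that is almost exactly y = 2x + 1
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [3.05, 4.98, 7.01, 8.96, 11.03]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Least-squares slope and intercept
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

# R^2 = 1 - SS_res / SS_tot
predicted = [slope * xi + intercept for xi in x]
ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, predicted))
ss_tot = sum((yi - mean_y) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot
print(r_squared)
```

For a simple linear fit like this one, the r_value returned by linregress is the correlation coefficient, so r_value ** 2 is the same quantity.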
import matplotlib.pyplot as plt
def predict(x):
return slope * x + intercept
fitLine = predict(pageSpeeds)
plt.scatter(pageSpeeds, purchaseAmount)
plt.plot(pageSpeeds, fitLine, c='r')
plt.show()
Explanation: Let's use the slope and intercept we got from the regression to plot predicted values vs. observed:
End of explanation |
8,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9s', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MIROC
Source ID: NICAM16-9S
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is radiative forcing from aerosol-cloud interactions computed from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |