| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, length 67–124k, nullable ⌀) | cleaned_prompt (string, length 168–30.3k, nullable ⌀) |
|---|---|---|
1,300
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-3', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
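# Hypothetical example of a completed preamble (the name/email values below are
# placeholders for illustration, not the real document authors):
#     DOC.set_author("Jane Doe", "jane.doe@example.org")
#     DOC.set_contributor("John Roe", "john.roe@example.org")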
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
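# Hypothetical example of answering an enumerated property -- the value passed
# to DOC.set_value must match one of the valid choices above verbatim, e.g.:
#     DOC.set_value("TEOS-10")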
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
1,301
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(3)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam')
model.summary()
def input_fn():
split = tfds.Split.TRAIN
dataset = tfds.load('iris', split=split, as_supervised=True)
dataset = dataset.map(lambda features, labels: ({'dense_input':features}, labels))
dataset = dataset.batch(32).repeat()
return dataset
for features_batch, labels_batch in input_fn().take(1):
print(features_batch)
print(labels_batch)
import tempfile
model_dir = tempfile.mkdtemp()
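# Convert the compiled tf.keras model into an Estimator; checkpoints and
# event files are written to model_dir.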
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model, model_dir=model_dir)
keras_estimator.train(input_fn=input_fn, steps=500)
eval_result = keras_estimator.evaluate(input_fn=input_fn, steps=10)
print('Eval result: {}'.format(eval_result))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an Estimator from a Keras model
Step2: Create a simple Keras model.
Step3: Compile the model and get its summary.
Step4: Create an input function
Step5: Test the input_fn.
Step6: Create an Estimator from the tf.keras model.
Step7: Train and evaluate the Estimator.
|
1,302
|
<ASSISTANT_TASK:>
Python Code:
# Run this cell!
from IPython.core.display import HTML
styles = "<style>\n.travail {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2018/01/04/16/53/building-3061124_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n\n.bilan {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2016/10/18/19/40/anatomy-1751201_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n</style>"
HTML(styles)
print("du texte")
a, b = "variables", "des" # saviez-vous qu'on peut affecter simultanément plusieurs variables ?
print(b, a)
h = 12
m = 8
print("il est exactement", h, "heures et", m, "minutes")
print("Cette ligne ", end="")
print("est écrite dans ", end="")
print("plusieurs prints")
b = 2
s = f"il a mangé {b} beignets"
print(s)
easy = "simple"
read = "lire"
write = "écrire"
print(f"{easy} à {read}, {easy} à {write}")
pr = 12
print(f"Ce truc coûte {pr}€ mais on va plutôt afficher {pr:.2f}€.")
print(f"On peut aussi faire des calculs, comme {345 + 54 * 7}")
nom1, score1 = "jack", 5
nom2, score2 = "jo", 17
nom3, score3 = "averell", 0
a = "doubi"
b = "douwaaaa \n"
a = input("tape un truc: ")
print(f"tu as tapé '{a}' !")
n = int(input("Un nombre ?"))
print(f"votre nombre est {n} et le suivant est {n+1}") # si n est converti en entier, l'addition doit fonctionner
a = int(input(""))
b = int(input(""))
c = int(input(""))
n = int(input("Tapez un positif:"))
if n < 0:
print("C'est malin...") # cette instruction est dans le if
print("J'avais dit un positif!") # celle-ci également
print(f"Votre nombre est {n}.") # mais pas celle-ci
n = int(input("Tapez un chiffre, cette fois:"))
if not(0 <= n <= 9): # si n n'est pas un chiffre
print("C'est malin...")
print("J'avais dit un chiffre !")
print(f"Vous avez tapé {n}.")
rep = input("On fait une petite pause? (o/n)")
if rep == "o":
print("Ca me semble raisonnable.")
elif rep == "n": # ne sera exécuté que si la condition précédente était fausse
print("J'en vois qui tournent de l'oeil.")
print("Allez, pause !")
rep = input("On fait une petite pause? (o/n)")
if rep == "o":
print("Ca me semble raisonnable.")
elif rep == "n":
print("J'en vois qui tournent de l'oeil.")
else: # sera exécuté si toutes les conditions précédentes sont fausses
print("Réponse inintelligible, la fatigue certainement.")
print("Allez, pause. Un café sans sucre pour moi.")
rep = input("Besoin de faire une pause? (o/n)")
while rep != "n":
rep = input("Sérieusement, vraiment besoin d'une pause? (o/n)")
print("Allez on bosse.")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the work sequences, you will come across certain logos
Step2: We can also mix text and variables, as follows
Step3: We can also specify the end-of-line character. This is useful for writing on the same line with several different print calls
Step4: <div class="alert alert-block alert-danger travail">
Step5: The f prefix in front of a string indicates that the string will be formatted.
Step6: <div class="alert alert-block alert-info bilan">
Step7: <div class="alert alert-block alert-danger travail">
Step8: <div class="alert alert-block alert-info bilan">
Step9: <div class="alert alert-block alert-danger travail">
Step10: <div class="alert alert-block alert-info bilan">
Step11: Conditional structures
Step12: <div class="alert alert-block alert-info bilan">
Step13: <div class="alert alert-block alert-danger travail">
Step14: In a conditional structure, we can also handle a final case when all the previous conditions have failed
Step15: <div class="alert alert-block alert-info bilan">
|
1,303
|
<ASSISTANT_TASK:>
Python Code:
from bs4 import BeautifulSoup
import requests
import pandas as pd
website_to_parse = "https://csvconf.com/speakers/"
# Save HTML to soup
html_data = requests.get(website_to_parse).text
soup = BeautifulSoup(html_data, "html5lib")
doc = soup.find_all("table", attrs={"class", "speakers"})[1]
names = doc.find_all("span", attrs={"class": "name"})
names = [t.getText().strip() for t in names]
titles = doc.find_all("p", attrs={"class": "title"})
titles = [t.getText().strip() for t in titles]
abstracts = doc.find_all("p", attrs={"class": "abstract"})
abstracts = [t.getText().strip() for t in abstracts]
print(len(names), len(titles), len(abstracts))
df = pd.DataFrame.from_dict({
'names':names,
'titles':titles,
'abstracts':abstracts})
# Combine text of title and abstract
df['document'] = df['titles'] + " " + df['abstracts']
# Add index
df['index'] = df.index
import sys
sys.path.append("/Users/csiu/repo/kick/src/python")
import sim_doc as sim_doc
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.utils.extmath import randomized_svd
## Preprocess
_ = sim_doc.preprocess_data(df)
## TF-IDF
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(df['doc_processed'])
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
# generate the linkage matrix
Z = linkage(X.toarray(), 'ward')
# calculate full dendrogram
plt.figure(figsize=(25, 4))
plt.title('Hierarchical Clustering of CSV,Conf,V3 Non-Keynote talks')
plt.xlabel('')
plt.ylabel('Distance')
dn = dendrogram(
Z,
leaf_rotation=270, # rotates the x axis labels
leaf_font_size=12, # font size for the x axis labels
labels = df["titles"].tolist(),
color_threshold=1.45, # where to cut for clusters
above_threshold_color='#bcbddc'
)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document representation
Step2: Preprocess text
Step3: Cluster the talks
|
1,304
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import requests
import json
%matplotlib notebook
politicianFile = 'PolbyCountry_data.csv'
politicianNames = pd.read_csv(politicianFile)
# rename variables
politicianNames.rename(columns = {'page':'article_name'}, inplace = True)
politicianNames.rename(columns = {'last_edit':'revision_id'}, inplace = True)
politicianNames[0:4]
politicianNames.shape
countryFile = 'Population Mid-2015.csv'
tempDF = pd.read_csv(countryFile, header=1)
countryPop = pd.DataFrame(data={'country': tempDF['Location'], 'population': tempDF['Data']})
countryPop[0:5]
# First add placeholder to politicianNames dataframe for article quality
politicianNames = politicianNames.assign(article_quality = "")
article_quality = politicianNames['article_quality']
# Next, join politicianNames with countryPop
politicData = politicianNames.merge(countryPop,how = 'inner')
#politicianNames[0:5]
politicData[0:5]
politicData.shape
# ORES
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/{revid}/{model}'
headers = {'User-Agent' : 'https://github.com/your_github_username', 'From' : 'your_uw_email@uw.edu'}
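# Query the ORES wp10 model for each revision's predicted article quality class.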
for irevid in range(0, politicData.shape[0]):
revidstr = str(politicData['revision_id'][irevid])
#print(revidstr)
params = {'project' : 'enwiki',
'model' : 'wp10',
'revid' : revidstr
}
try:
api_call = requests.get(endpoint.format(**params), headers=headers)
response = api_call.json()
#print(json.dumps(response, indent=4, sort_keys=True))
# Create data frame and add numeric values for the plotting variable
politicData.loc[irevid,'article_quality'] = response['enwiki']['scores'][revidstr]['wp10']['score']['prediction']
#print(response['enwiki']['scores'][revidstr]['wp10']['score']['prediction'])
except:
print('Error at ' + str(irevid))
if irevid % 500 == 0:
print(irevid)
# Write out csv file
politicData.to_csv('en-wikipedia_bias_2015.csv', index=False)
politicData[0:4]
politicData.shape[0]
#politicData[-5:]
## getting the data from the CSV files
import csv
data = []
with open('page_data.csv') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
data.append([row[0],row[1],row[2]])
print(data[782])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import data of politicians by country
Step2: Import population by country
Step3: Combined data
Step4: ORES article quality data
Step5: Importing the other data is just a matter of reading CSV files in! (and for the R programmers - we'll have an R example up as soon as the Hub supports the language).
|
1,305
|
<ASSISTANT_TASK:>
Python Code:
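# `max_num` is not defined in this row; below is a minimal hypothetical sketch
# consistent with the example calls that follow (keep k digits of the input,
# preserving their original order, so the resulting number string is maximal):
def max_num(digits, k):
    stack = []
    to_drop = len(digits) - k  # how many digits we may still discard
    for d in digits:
        # pop smaller digits while we can still afford to drop them
        while to_drop and stack and stack[-1] < d:
            stack.pop()
            to_drop -= 1
        stack.append(d)
    return "".join(stack[:k])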
max_num("9512983", 1) # "9"
max_num("9512983", 3) # "998"
max_num("9512983", 7) # "9512983"
POST = {
u"page[1][1]['id']": [u'baloes_bd_8_1'],
u"page[0][1]['text']": [u'Mum, dad! Look, the school email. '],
u"page[1][0]['id']": [u'baloes_bd_9_1'],
u"page[0][1]['id']": [u'baloes_bd_6_1'],
u"page[0][0]['id']": [u'baloes_bd_5_1'],
u'next': [u'/mycontent/5910974510923776'],
u"page[0][0]['text']": [u'Some time later\u2026'],
u'space_id': [u'5910974510923776'],
u"page[1][0]['text']": [u'You open the email, Luana. \u2028It\u2019s for you!'
],
u"page[1][1]['text']": [u'Me too!'],
u'skip_editor': [u'1'],
}
[
[u'Some time later\u2026', u'Mum, dad! Look, the school email.'],
[u'You open the email, Luana. \u2028It\u2019s for you!', u'Me too!']
]
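# A minimal sketch of the requested transformation (assumes every text key
# follows the "page[<page>][<slot>]['text']" pattern shown above; the function
# name is illustrative):
import re

def pages_from_post(post):
    pat = re.compile(r"page\[(\d+)\]\[(\d+)\]\['text'\]")
    found = {}
    for key, values in post.items():
        m = pat.match(key)
        if m:
            # POST values are lists; keep the first entry, trimmed
            found[(int(m.group(1)), int(m.group(2)))] = values[0].strip()
    pages = []
    for page in sorted({p for p, _ in found}):
        slots = sorted(s for p, s in found if p == page)
        pages.append([found[(page, s)] for s in slots])
    return pages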
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <center><h3>2. Mając dane w takiej postaci, gdzie pierwszy index to numer strony, a drugi to numer zawartości.</center></h3>
Step2: Stwórz listę stron, która będzie zawierała kolejne zawartości 'text'.
|
1,306
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import json
import sys
import time

DT_FORMAT_STR = "%Y-%m-%dT%H:%M:%S.%f"
def stream_of_tweets(n=10):
# generator function to generate sequential tweets
for i in range(n):
time.sleep(0.01)
tweet = {
'body':'I am tweet #' + str(i),
'postedTime':datetime.datetime.now().strftime(DT_FORMAT_STR)
}
yield json.dumps(tweet)
for tweet in stream_of_tweets(2):
print(tweet)
for tweet in stream_of_tweets():
print(tweet)
def enrich_tweets_1(istream,enrichment_class_list):
    """simplified copy of tweet_enricher.py"""
enrichment_instance_list = [enrichment_class() for enrichment_class in enrichment_class_list]
for tweet_str in istream:
try:
tweet = json.loads(tweet_str)
except ValueError:
continue
for instance in enrichment_instance_list:
instance.enrich(tweet)
sys.stdout.write( json.dumps(tweet) + '\n')
class TestEnrichment():
value = 42
def enrich(self,tweet):
if 'enrichments' not in tweet:
tweet['enrichments'] = {}
tweet['enrichments']['TestEnrichment'] = self.value
class TestEnrichment2():
value = 48
def enrich(self,tweet):
if 'enrichments' not in tweet:
tweet['enrichments'] = {}
tweet['enrichments']['TestEnrichment2'] = self.value
enrich_tweets_1(stream_of_tweets(5),[TestEnrichment,TestEnrichment2])
DT_FORMAT_STR = "%Y-%m-%dT%H:%M:%S.%f"
def stream_of_tweets(n=10):
# generator function to generate sequential tweets
for i in range(n):
time.sleep(0.01)
tweet = {
'body':'I am tweet #' + str(i),
'postedTime':datetime.datetime.now().strftime(DT_FORMAT_STR)
}
yield tweet # <<-- this is the only change from above
class EnrichmentBase():
def enrich(self,tweet):
if 'enrichments' not in tweet:
tweet['enrichments'] = {}
tweet['enrichments'][type(self).__name__] = self.enrichment_value(tweet)
class TestEnrichment(EnrichmentBase):
def enrichment_value(self,tweet):
return 42
def enrich_tweets_2(istream,enrichment_class,**kwargs):
    """simplify `enrich_tweets_1`:
    - only one enrichment
    - generator function
    - leave tweets as dict objects
    """
enrichment_instance = enrichment_class()
for tweet in istream:
enrichment_instance.enrich(tweet)
sys.stdout.write( str(tweet) + '\n')
%%time
enrich_tweets_2(
istream=stream_of_tweets(5),
enrichment_class=TestEnrichment
)
class SlowEnrichment(EnrichmentBase):
def enrichment_value(self,tweet):
# get the tweet number from body
# and sleep accordingly
seconds = int(tweet['body'][-1]) + 1
time.sleep(seconds)
return str(seconds) + ' second nap'
%%time
enrich_tweets_2(
istream=stream_of_tweets(5),
enrichment_class=SlowEnrichment
)
import threading
def enrich_tweets_3(istream,enrichment_class):
    """use threads to run `enrich`"""
enrichment_instance = enrichment_class()
# we need to hang onto the threads spawned
threads = []
# ...and the tweets
enriched_tweets = []
for tweet in istream:
# run `enrich` in a new thread
thread = threading.Thread(
target=enrichment_instance.enrich,
args=(tweet,)
)
thread.start() # runs the function in a new thread
threads.append(thread)
enriched_tweets.append(tweet)
sys.stderr.write('submitted all tweets to threads' + '\n')
for thread in threads:
thread.join() # blocks until thread finishes
sys.stderr.write('all threads finished' + '\n')
for enriched_tweet in enriched_tweets:
sys.stdout.write( str(enriched_tweet) + '\n')
%%time
enrich_tweets_3(
istream=stream_of_tweets(5),
enrichment_class=SlowEnrichment
)
def enrich_tweets_4(istream,enrichment_class,**kwargs):
    """better use of threads"""
enrichment_instance = enrichment_class()
queue = [] # queue of (thread,tweet) tuples
max_threads = kwargs['max_threads']
for tweet in istream:
# run `enrich` in a new thread
thread = threading.Thread(
target=enrichment_instance.enrich,
args=(tweet,)
)
thread.start()
queue.append((thread,tweet))
# don't accept more tweets until a thread is free
while len(queue) >= max_threads:
# iterate through all threads
# when threads are dead, remove from queue and yield tweet
new_queue = []
for thread,tweet in queue:
if thread.is_alive():
new_queue.append((thread,tweet))
else:
sys.stdout.write( str(tweet) + '\n') # print enriched tweet
queue = new_queue
time.sleep(0.1)
sys.stderr.write('submitted all tweets to threads' + '\n')
# cleanup threads that didn't finish while iterating through tweets
for thread,tweet in queue:
thread.join()
time.sleep(0.01)
sys.stdout.write( str(tweet) + '\n')
%%time
enrich_tweets_4(
istream=stream_of_tweets(5),
enrichment_class=SlowEnrichment,
max_threads = 1 # play with this number
)
from concurrent import futures
def enrich_tweets_5(istream,enrichment_class,**kwargs):
    """use concurrent.futures instead of bare Threads"""
enrichment_instance = enrichment_class()
with futures.ThreadPoolExecutor(max_workers=kwargs['max_workers']) as executor:
future_to_tweet = {}
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
future_to_tweet[future] = tweet
sys.stderr.write('submitted all tweets as futures' + '\n')
for future in futures.as_completed(future_to_tweet):
sys.stdout.write( str(future_to_tweet[future]) + '\n')
%%time
enrich_tweets_5(
istream=stream_of_tweets(5),
enrichment_class=SlowEnrichment,
max_workers = 5
)
class NewEnrichmentBase():
def enrich(self,tweet):
if 'enrichments' not in tweet:
tweet['enrichments'] = {}
tweet['enrichments'][type(self).__name__] = self.enrichment_value(tweet)
return tweet # <<-- the only new piece
class NewSlowEnrichment(NewEnrichmentBase):
def enrichment_value(self,tweet):
# get the tweet number from body
# and sleep accordingly
seconds = int(tweet['body'].split('#')[-1]) + 1
if seconds > 9:
seconds = 1
time.sleep(seconds)
return str(seconds) + ' second nap'
from concurrent import futures
def enrich_tweets_6(istream,enrichment_class,**kwargs):
    """new enrichment protocol"""
enrichment_instance = enrichment_class()
with futures.ThreadPoolExecutor(max_workers=kwargs['max_workers']) as executor:
futures_list = [] # <<-- this is now just a list of futures
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
futures_list.append(future)
sys.stderr.write('submitted all tweets as futures' + '\n')
for future in futures.as_completed(futures_list):
sys.stdout.write( str(future.result()) + '\n')
%%time
enrich_tweets_6(
istream=stream_of_tweets(5),
enrichment_class=NewSlowEnrichment,
max_workers = 5
)
from concurrent import futures
def enrich_tweets_7(istream,enrichment_class,**kwargs):
def print_the_tweet(future):
sys.stdout.write( str(future.result()) + '\n')
enrichment_instance = enrichment_class()
with futures.ThreadPoolExecutor(max_workers=kwargs['max_workers']) as executor:
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
future.add_done_callback(print_the_tweet)
sys.stderr.write('submitted all tweets as futures' + '\n')
%%time
enrich_tweets_7(
istream=stream_of_tweets(5),
enrichment_class=NewSlowEnrichment,
max_workers = 5
)
from concurrent import futures
def enrich_tweets_8(istream,enrichment_class,**kwargs):
max_workers = kwargs['max_workers']
def print_the_tweet(future):
sys.stdout.write( str(future.result()) + '\n')
enrichment_instance = enrichment_class()
with futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
futures_list = []
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
future.add_done_callback(print_the_tweet)
futures_list.append(future)
futures_list[:] = [future for future in futures_list if future.running()]
while len(futures_list) >= max_workers:
futures_list[:] = [future for future in futures_list if future.running()]
time.sleep(0.5)
sys.stderr.write('submitted all tweets as futures' + '\n')
%%time
enrich_tweets_8(
istream=stream_of_tweets(50),
enrichment_class=NewSlowEnrichment,
max_workers = 5
)
import queue
def enrich_tweets_9(istream,enrichment_class,**kwargs):
    """use a pool of threads, each running a worker reading from a common queue"""
max_workers = kwargs['max_workers']
queue_size = kwargs['queue_size']
enrichment_instance = enrichment_class()
def worker():
        """this function runs on new threads
        and reads from a common queue
        """
time.sleep(0.5)
while True:
tweet = q.get()
if tweet is None: # this is the signal to exit
break
enriched_tweet = enrichment_instance.enrich(tweet)
sys.stdout.write(str(enriched_tweet) + '\n')
q.task_done()
time.sleep(0.1)
thread_pool = [threading.Thread(target=worker) for _ in range(max_workers)]
[thread.start() for thread in thread_pool]
q = queue.Queue(maxsize=queue_size)
for tweet in istream:
q.put(tweet)
sys.stderr.write('submitted all tweets to threads' + '\n')
# block until queue is empty
q.join()
# kill the threads
for _ in range(len(thread_pool)):
q.put(None)
for thread in thread_pool:
thread.join()
%%time
enrich_tweets_9(
istream=stream_of_tweets(10),
enrichment_class=NewSlowEnrichment,
max_workers = 1,
queue_size=5
)
from random import randrange
import hashlib
class CPUBoundEnrichment(NewEnrichmentBase):
def enrichment_value(self,tweet):
# make a SHA-256 hash of random byte arrays
data = bytearray(randrange(256) for i in range(2**21))
algo = hashlib.new('sha256')
algo.update(data)
return algo.hexdigest()
def enrich_tweets_10(istream,enrichment_class,**kwargs):
    """use a `ProcessPoolExecutor` to manage processes"""
max_workers=kwargs['max_workers']
executor_name=kwargs['executor_name']
def print_the_tweet(future):
sys.stdout.write( str(future.result()) + '\n')
enrichment_instance = enrichment_class()
with getattr(futures,executor_name)(max_workers=max_workers) as executor: # <- this is the only change from #8
futures_list = []
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
future.add_done_callback(print_the_tweet)
futures_list.append(future)
# have to throttle with this hack
futures_list[:] = [future for future in futures_list if future.running()]
while len(futures_list) >= max_workers:
futures_list[:] = [future for future in futures_list if future.running()]
time.sleep(0.5)
sys.stderr.write('submitted all tweets as futures' + '\n')
%%time
enrich_tweets_10(
istream=stream_of_tweets(10),
enrichment_class=CPUBoundEnrichment,
executor_name='ProcessPoolExecutor',
max_workers = 2,
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The tweet enricher
Step3: Enrichment classes
Step5: Convenience and simplification
Step6: The problem
Step8: commentary
Step10: commentary
Step12: commentary
Step14: commentary
Step16: commentary
Step18: commentary
Step21: commentary
Step23: commentary
|
1,307
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
import scipy.stats
import matplotlib.pyplot as plt

xx1 = np.linspace(-4, 4, 100)
xx2 = np.linspace(-4, -2, 100)
xx3 = np.linspace(2, 4, 100)
plt.subplot(3, 1, 1)
plt.fill_between(xx1, sp.stats.norm.pdf(xx1), facecolor='green', alpha=0.1)
plt.fill_between(xx2, sp.stats.norm.pdf(xx2), facecolor='blue', alpha=0.35)
plt.fill_between(xx3, sp.stats.norm.pdf(xx3), facecolor='blue', alpha=0.35)
plt.text(-3, 0.1, "p-value=%5.3f" % (2*sp.stats.norm.cdf(-2)), horizontalalignment='center')
plt.title(r"Test statistics = 2. Two-tailed test. $H_a: \mu \neq 0$")
plt.subplot(3, 1, 2)
plt.fill_between(xx1, sp.stats.norm.pdf(xx1), facecolor='green', alpha=0.1)
plt.fill_between(xx3, sp.stats.norm.pdf(xx3), facecolor='blue', alpha=0.35)
plt.text(3, 0.1, "p-value=%5.3f" % (sp.stats.norm.cdf(-2)), horizontalalignment='center')
plt.title(r"Test statistics = 2. One-tailed test. $H_a: \mu > 0$")
plt.subplot(3, 1, 3)
plt.fill_between(xx1, sp.stats.norm.pdf(xx1), facecolor='green', alpha=0.1)
plt.fill_between(xx2, sp.stats.norm.pdf(xx2), facecolor='blue', alpha=0.35)
plt.text(-3, 0.1, "p-value=%5.3f" % (sp.stats.norm.cdf(-2)), horizontalalignment='center')
plt.title(r"Test statistics = -2. One-tailed test. $H_a: \mu < 0$")
plt.tight_layout()
plt.show()
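# One-sided binomial test: P(X >= 12) for 12 heads in 15 tosses of a fair coin (~0.0176)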
1 - sp.stats.binom(15, 0.5).cdf(12-1)
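# One-sample t-test by hand: t = mean / (std / sqrt(n)); the one-sided p-value
# below comes from a Student's t distribution with n-1 = 4 degrees of freedom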
x = np.array([-0.025, -0.05, 0.043, -0.037, -0.056])
t = x.mean()/x.std(ddof=1)*np.sqrt(len(x))
t, sp.stats.t(df=4).cdf(t)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A very small p-value means that the test statistic would be very unlikely to occur if the null hypothesis were true.
Step2: Since this value is smaller than 5% but larger than 1%, we can reject the null hypothesis at a 5% significance level (i.e., we can say the coin is not fair), but we cannot reject it at a 1% significance level (i.e., we cannot say the coin is not fair).
|
1,308
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import pyensae.datasource
pyensae.datasource.download_data("twitter_for_network_100000.db.zip")
import cytoolz as ct # import groupby, valmap, compose
import cytoolz.curried as ctc ## pipe, map, filter, get
import sqlite3
import pprint
try:
import ujson as json
except:
import json
conn_sqlite = sqlite3.connect("twitter_for_network_100000.db")
cursor_sqlite = conn_sqlite.cursor()
import os, psutil, gc, sys
if sys.platform.startswith("win"):
pass
else:
# only on Linux
import resource
def memory_usage_psutil():
gc.collect()
process = psutil.Process(os.getpid())
mem = process.memory_info()[0] / float(2 ** 20)
print( "Memory used : %i MB" % mem )
if not sys.platform.startswith("win"):
print( "Max memory usage : %i MB" % (resource.getrusage(resource.RUSAGE_SELF).ru_maxrss//1024) )
cursor_sqlite.execute("select content FROM tw_users")
with open("tw_users_all.json", 'w') as f:
ct.count( ct.map( f.write, ct.interpose( "\n", ct.pluck(0, cursor_sqlite ) ) ) )
from tkinterquickhelper.funcwin.default_functions import file_split
file_split("tw_users_all.json", 50 )
import dask.bag as dbag
try:
import ujson as json
except:
import json
from operator import add
a = dbag.read_text('tw_users_all.json.split.*.json')
b = a.map(json.loads).pluck("followers_count").fold(add).compute()
%timeit -n1 b = a.map(json.loads).pluck("followers_count").fold(add).compute()
b
useless = [it**2 for it in range(25000000)]
#del useless
import numpy
import dask.array as da
big_random = da.random.normal( 1000, 20000, size = (50000,50000), chunks = (1000,1000) )
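# A lazily evaluated 50000 x 50000 array of N(1000, 20000) draws, split into
# 1000 x 1000 chunks; nothing is computed until .compute() is called.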
big_random.mean().compute()
import numpy as np
import h5py
for it in range(10):
a = np.random.random(size=(5000,5000))
h5f = h5py.File('data_{0:02d}.h5'.format(it), 'w')
h5f.create_dataset('dataset_1', data=a)
h5f.close()
if it % 5 == 0:
print(it)
from glob import glob
import h5py
import dask.array as da
files = list(sorted(glob('data_*.h5'))) * 10
dsets = [h5py.File(fn)['dataset_1'] for fn in files]
arrays = [da.from_array(dset, chunks=(1000, 1000)) for dset in dsets]
x = da.stack(arrays, axis=0)
print(x.shape)
print( x[:,0,0].sum().compute() )
print( x[:,:,:].sum().compute() )
memory_usage_psutil()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: We extract the data from the database and write it to a flat file.
Step3: We split this flat file into 50 pieces.
Step4: We create a dask.bag object to handle this set of files.
Step5: We can then use the cytoolz syntax on the dbag object.
Step6: Be careful with your memory usage when you use dask.
Step7: In the same way, dask lets you efficiently parallelize operations performed with numpy.
Step8: If you have data obtained in the usual numpy way (here generated randomly)
Step9: dask transparently handles this set of matrices as a single higher-dimensional matrix. Below, x is a matrix of size 100 x 5000 x 5000, which would never fit in memory.
|
1,309
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import time
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="ticks", color_codes=True, font_scale=1.5)
sns.set_style({"xtick.direction": "in", "ytick.direction": "in"})
from collections import OrderedDict
import h5py
h5file = "data/mueller_x-1_y0.5_dt0.0001.h5"
f = h5py.File(h5file, 'r')
data = np.array(f['data'])
f.close()
# Define the mueller potential according to the V(x,y) shown above
def fmueller(x, y):
    """Functional form of the Mueller potential.

    x : float
        Value of molecular coordinate x.
    y : float
        Value of molecular coordinate y.
    """
A1, A2, A3, A4 = [-200.0, -100.0, -170.0, 15.0]
a1, a2, a3, a4 = [-1.0, -1.0, -6.5, 0.7]
b1, b2, b3, b4 = [0.0, 0.0, 11.0, 0.6]
c1, c2, c3, c4 = [-10.0, -10.0, -6.5, 0.7]
x01, x02, x03, x04 = [1.0, 0.0, -0.5, -1.0]
y01, y02, y03, y04 = [0.0, 0.5, 1.5, 1.0]
return A1*np.exp(a1*(x-x01)**2 + b1*(x-x01)*(y-y01) + c1*(y-y01)**2)+ \
A2*np.exp(a2*(x-x02)**2 + b2*(x-x02)*(y-y02) + c2*(y-y02)**2)+ \
A3*np.exp(a3*(x-x03)**2 + b3*(x-x03)*(y-y03) + c3*(y-y03)**2)+ \
A4*np.exp(a4*(x-x04)**2 + b4*(x-x04)*(y-y04) + c4*(y-y04)**2)
# Discretize trajectory and plot onto the mueller potential
discret = np.zeros(data.shape[0])
statistic, x_edge, y_edge, binnumber = \
scipy.stats.binned_statistic_2d(data[:,1], data[:,2], discret, 'count', \
bins=[np.linspace(np.amin(data[:,1]), np.amax(data[:,1]),6), \
np.linspace(np.amin(data[:,2]), np.amax(data[:,2]),6)], expand_binnumbers=False)
y = np.linspace(-0.5,2.,200)
x = np.linspace(-2.,1.1,200)
M = np.zeros((len(x), len(y)))
lines = []
for i, x_i in enumerate(x):
for j, y_i in enumerate(y):
aux = fmueller(x_i,y_i)
if aux < 100: M[i, j] = aux
X, Y = np.meshgrid(x,y)
fig, ax = plt.subplots(figsize=(6.75, 5.75))
levels = np.arange(-150.0, 20.0, 20)
CS = ax.contour(X, Y, M.T, levels, cmap='binary_r')
ax.clabel(CS, inline=0.9, fontsize=10)
cmaps = OrderedDict()
cmaps['Qualitative'] = ['Pastel1', 'Pastel2', 'Paired', 'Accent', \
'Dark2', 'Set1', 'Set2', 'Set3', \
'tab10', 'tab20', 'tab20b', 'tab20c']
plt.scatter(data[:,1], data[:,2], c=binnumber, cmap=plt.get_cmap('tab20'), s=1)
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.colorbar()
plt.tight_layout()
fig,ax=plt.subplots(3, 1, figsize=(12,4), sharex=True)
plt.subplots_adjust(wspace=0, hspace=0)
ax[0].plot(range(0,len(data[:,1])), data[:,1], lw=0.5)
ax[1].plot(range(0,len(data[:,2])), data[:,2], color="g", lw=0.5)
ax[2].plot(binnumber, lw=0.25)
ax[0].set_ylabel('x')
ax[1].set_ylabel('y')
ax[2].set_ylabel("state")
ax[2].set_xlabel("Time", fontsize=20)
ax[0].set_xlim(0,50000)
plt.show()
# MASTER EQUATION MODEL
from mastermsm.trajectory import traj
from mastermsm.msm import msm
distraj = traj.TimeSeries(distraj=list(binnumber), dt=1)
distraj.find_keys()
distraj.keys.sort()
msm_2D = msm.SuperMSM([distraj],sym=True)
# CONVERGENCE test
import random
rand = lambda: random.randint(0, 255)
msm_2D.convergence_test(time=[1, 5, 10, 50, 100, 500, 1000], error=True)
fig, ax = plt.subplots()
for i in range(0,6):
tau_vs_lagt = np.array([[x, msm_2D.msms[x].tauT[i], msm_2D.msms[x].tau_std[i]] \
for x in sorted(msm_2D.msms.keys())])
ax.errorbar(tau_vs_lagt[:,0], tau_vs_lagt[:,1], tau_vs_lagt[:,2], marker='o', ms=7)
ax.set_xlabel(r'$\Delta$t ', fontsize=16)
ax.set_ylabel(r'$\tau_i$', fontsize=16)
ax.set_xscale('log')
ax.set_yscale('log')
ax.fill_between(10**np.arange(-1,4,0.2), 1e-1, 10**np.arange(-1,4,0.2), facecolor='lightgray', alpha=0.5)
ax.set_ylim(1e-1,1000)
ax.set_xlim(0.7,1400)
plt.tight_layout()
lagt=10
msm_2D.do_msm(lagt)
msm_2D.msms[lagt].do_trans(evecs=True)
# VALIDATION - AUTOCORRELATION FUNCTION (acf)
acf = msm_2D.msms[lagt].acf_mode()
time = np.arange(len(acf[1]))*msm_2D.data[0].dt
fig, ax = plt.subplots()
ax.plot(time, acf[1], 'o', label='ACF(left-eig)')
ax.plot(time,np.exp(-time*1./msm_2D.msms[lagt].tauT[0]), \
label='exp(-t/%5.3f)' % msm_2D.msms[lagt].tauT[0])
ax.set_ylim(0,1)
ax.set_xlim(0,1300)
ax.set_xlabel('Time')
ax.set_ylabel('C$_{11}$(t)')
ax.legend()
plt.show()
# Spectrum of relaxation times
fig, ax = plt.subplots()
ax.plot(range(1,len(msm_2D.msms[lagt].tauT)+1), \
np.log(msm_2D.msms[lagt].tauT), 'x-', markersize=12, linewidth=4)
ax.set_xlabel('Eigenvalue')
ax.set_ylabel(r'log $\tau_i$')
plt.tight_layout()
plt.show()
fig, ax = plt.subplots(figsize=(10,4))
ax.plot(msm_2D.msms[lagt].rvecsT[:,1])
ax.fill_between(range(len(msm_2D.msms[lagt].rvecsT[:,1])), 0, \
msm_2D.msms[lagt].rvecsT[:,1], \
where=msm_2D.msms[lagt].rvecsT[:,1]>0,\
facecolor='c', interpolate=True,alpha=.4)
ax.fill_between(range(len(msm_2D.msms[lagt].rvecsT[:,1])), 0, \
msm_2D.msms[lagt].rvecsT[:,1], \
where=msm_2D.msms[lagt].rvecsT[:,1]<0,\
facecolor='g', interpolate=True,alpha=.4)
ax.set_ylabel("$\Psi^R_1$")
plt.show()
# 2D projection
binnumber2 = np.zeros(len(data[:,1]))
for i in range(len(binnumber)-1):
j = binnumber[i]
for k in range(len(msm_2D.msms[lagt].keep_keys)-1):
if j==msm_2D.msms[lagt].keep_keys[k]:
binnumber2[i] = np.log(msm_2D.msms[lagt].peqT[k])
break
y = np.linspace(-0.5,2.,200)
x = np.linspace(-2.,1.1,200)
M = np.zeros((len(x), len(y)))
lines = []
for i, x_i in enumerate(x):
for j, y_i in enumerate(y):
aux = fmueller(x_i,y_i)
if aux<100: M[i, j] = aux
X, Y = np.meshgrid(x,y)
fig, ax = plt.subplots()
levels = np.arange(-130.0, 40.0, 20)
CS = ax.contour(X, Y, M.T, levels)
ax.clabel(CS, inline=0.9, fontsize=6)
cmaps = OrderedDict()
cmaps['Diverging'] = ['PiYG', 'PRGn', 'BrBG', 'PuOr', 'RdGy', 'RdBu',
'RdYlBu', 'RdYlGn', 'Spectral', 'coolwarm', 'bwr', 'seismic']
plt.scatter(data[:,1],data[:,2],c=binnumber2,cmap=plt.get_cmap('seismic'))
plt.colorbar()
plt.title('log of equilibrium distribution')
plt.show()
import hdbscan
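# Density-based clustering of the 2D (x, y) points; low-probability
# assignments are relabeled as noise (-1) below.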
X = data[:,[1,2]]
hb = hdbscan.HDBSCAN(min_cluster_size=200).fit(X)
labels = hb.labels_
for i, x_i in enumerate(labels):
if hb.probabilities_[i] < 0.1:
labels[i] = -1
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
colors = ['royalblue', 'maroon', 'forestgreen', 'mediumorchid', \
'tan', 'deeppink', 'olive', 'goldenrod', 'lightcyan', 'lightgray']
vectorizer = np.vectorize(lambda x: colors[x % len(colors)])
plt.scatter(X[:,0],X[:,1], c=vectorizer(labels))
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
hb.condensed_tree_.plot()
plt.title('tree')
plt.show()
# remove noise (label=-1) to obtain a trajectory for MasterMSM
last = labels[0]
for i, x_i in enumerate(labels):
if x_i == -1:
labels[i] = last
else:
last = x_i
labels[0] = labels[1]
cluster = traj.TimeSeries(distraj=list(labels), dt=1)
cluster.find_keys()
cluster.keys.sort()
msm_cluster = msm.SuperMSM([cluster],sym=True)
lagt=5 # lag time
msm_cluster.do_msm(lagt)
msm_cluster.msms[lagt].do_trans(evecs=True)
fig, ax = plt.subplots()
ax.plot(range(1,len(msm_cluster.msms[lagt].tauT)+1), \
np.log(msm_cluster.msms[lagt].tauT), 'o-', label='HDBSCAN')
# Note for msm_2D we chose lagt=10
ax.plot(range(1,len(msm_cluster.msms[lagt].tauT)+1), \
np.log(msm_2D.msms[10].tauT[:len(msm_cluster.msms[lagt].tauT)]), \
'x-', label='2D binning')
ax.set_xlabel('Eigenvalue')
ax.set_ylabel(r'log $\tau_i$')
plt.tight_layout()
plt.legend()
plt.title('log_mueller_relax_times.png')
plt.show()
msm_cluster.do_lbrate()
msm_cluster.msms[lagt].do_rate(method='MLPB', evecs=True, init=msm_cluster.lbrate)
fig, ax = plt.subplots(2,1, figsize=(6,4), sharex=False)
plt.subplots_adjust(wspace=0.9, hspace=0.9)
ax[0].plot(msm_cluster.msms[lagt].rvecsK[:,1])
ax[0].fill_between(range(len(msm_cluster.msms[lagt].rvecsK[:,1])), \
0, msm_cluster.msms[lagt].rvecsK[:,1], alpha=.4)
ax[1].plot(msm_cluster.msms[lagt].lvecsK[:,1])
ax[1].axhline(0,0,25, c='k', ls='--', lw=1)
ax[1].fill_between(range(len(msm_cluster.msms[lagt].lvecsK[:,1])), \
msm_cluster.msms[lagt].lvecsK[:,1], 0, alpha=.4)
ax[0].set_ylabel(r"$\Psi^R_1$")
ax[1].set_ylabel(r"$\Psi^L_1$")
plt.tight_layout(h_pad=0)
ax[0].set_title('1st_eigenvectors_R_L.png')
plt.show()
plt.figure()
plt.imshow(np.log10(msm_cluster.msms[lagt].count), \
interpolation='none', cmap='viridis_r', origin='lower')
plt.ylabel(r'$\it{j}$')
plt.xlabel(r'$\it{i}$')
plt.colorbar()
plt.title(r'Count matrix (log), $\mathbf{N}$')
plt.show()
plt.imshow(np.log10(msm_cluster.msms[lagt].trans), interpolation='none', \
cmap='viridis_r', origin='lower')
plt.ylabel(r'$\it{j}$')
plt.xlabel(r'$\it{i}$')
plt.colorbar()
plt.title(r'Transition matrix (log), $\mathbf{T}$')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download the Brownian trajectory from here, and read it in using trajectory from the COSSIO generator.
Step3: Trajectory analysis and assignment
Step4: MSM
Step5: MSM
Step6: The first mode, captured by $\lambda_1$, is significantly slower than the others. That mode, described by the right eigenvector $\psi^R_1$, corresponds to the transition between the two main minima. The next plots show this eigenvector and its projection on the Mueller potential.
Step7: HDBSCAN clustering
Step8: MSM validation
Step9: RATE matrix analysis
Step10: COUNT and TRANSITION matrices
|
1,310
|
<ASSISTANT_TASK:>
Python Code:
import gzip
import cPickle as pickle
with gzip.open("../data/train.pklz", "rb") as train_file:
train_set = pickle.load(train_file)
with gzip.open("../data/test.pklz", "rb") as test_file:
test_set = pickle.load(test_file)
with gzip.open("../data/questions.pklz", "rb") as questions_file:
questions = pickle.load(questions_file)
print train_set[1]
print questions[1].keys()
X = []
Y = []
avg_time_per_user = {}
avg_time_per_que = {}
for key in train_set:
# We only care about positive case at this time
#if train_set[key]['position'] < 0:
# continue
uid = train_set[key]['uid']
qid = train_set[key]['qid']
pos = train_set[key]['position']
q_length = max(questions[qid]['pos_token'].keys())
category = questions[qid]['category'].lower()
answer = questions[qid]['answer'].lower()
# Calculate average response time per user
temp = 0; num = 0
if uid not in avg_time_per_user.keys():
for keysubset in train_set:
if train_set[keysubset]['uid'] == uid:
temp += train_set[keysubset]['position']
num += 1
avg_time_per_user[uid] = float(temp)/num
temp=0; num = 0
# Calculate average response time per question
temp=0; num = 0
if qid not in avg_time_per_que.keys():
for keysubset in train_set:
if train_set[keysubset]['qid'] == qid:
temp += train_set[keysubset]['position']
num += 1
avg_time_per_que[qid] = float(temp)/num
temp=0; num = 0
feat = {"uid": str(uid), "qid": str(qid), "q_length": q_length, "category": category, "answer": answer, "avg_per_uid": avg_time_per_user[uid], "avg_per_qid":avg_time_per_que[qid]}
X.append(feat)
Y.append([pos])
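# Hedged alternative (illustrative): the quadratic rescans above can be replaced
# by a single pass over train_set; shown here for the per-user averages, with
# float division.
from collections import defaultdict
pos_sum = defaultdict(float)
pos_cnt = defaultdict(int)
for key in train_set:
    pos_sum[train_set[key]['uid']] += train_set[key]['position']
    pos_cnt[train_set[key]['uid']] += 1
avg_time_per_user_fast = dict((u, pos_sum[u]/pos_cnt[u]) for u in pos_sum)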
print len(X)
print len(Y)
print X[0], Y[0]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
X = vec.fit_transform(X)
print X[0]
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.cross_validation import train_test_split, cross_val_score
import math
X_train, X_test, Y_train, Y_test = train_test_split (X, Y)
regressor = LinearRegression()
scores = cross_val_score(regressor, X, Y, cv=10, scoring= 'mean_squared_error')
# Flip the sign of MSE and take sqrt of that values.
for ii in xrange(len(scores)):
scores[ii] = math.sqrt(-1*scores[ii])
print 'Linear Cross validation RMSE scores:', scores.mean()
print scores
regressor = Ridge()
scores = cross_val_score(regressor, X, Y, cv=10, scoring= 'mean_squared_error')
# Flip the sign of MSE and take sqrt of that values.
for ii in xrange(len(scores)):
scores[ii] = math.sqrt(-1*scores[ii])
print 'Ridge Cross validation RMSE scores:', scores.mean()
print scores
regressor = Lasso()
scores = cross_val_score(regressor, X, Y, cv=10, scoring= 'mean_squared_error')
# Flip the sign of MSE and take sqrt of that values.
for ii in xrange(len(scores)):
scores[ii] = math.sqrt(-1*scores[ii])
print 'Lasso Cross validation RMSE scores:', scores.mean()
print scores
regressor = ElasticNet()
scores = cross_val_score(regressor, X, Y, cv=10, scoring= 'mean_squared_error')
# Flip the sign of MSE and take sqrt of that values.
for ii in xrange(len(scores)):
scores[ii] = math.sqrt(-1*scores[ii])
print 'ElasticNet Cross validation RMSE scores:', scores.mean()
print scores
a = [{1: 2}, {2: 3}]
b = [{3: 2}, {4: 3}]
c = a + b
print c[:len(a)]
print c[len(a):]
X_train = []
Y_train = []
for key in train_set:
# We only care about positive case at this time
#if train_set[key]['position'] < 0:
# continue
uid = train_set[key]['uid']
qid = train_set[key]['qid']
pos = train_set[key]['position']
q_length = max(questions[qid]['pos_token'].keys())
category = questions[qid]['category'].lower()
answer = questions[qid]['answer'].lower()
feat = {"uid": str(uid), "qid": str(qid), "q_length": q_length, "category": category, "answer": answer}
X_train.append(feat)
Y_train.append(pos)
X_test = []
Y_test = []
for key in test_set:
uid = test_set[key]['uid']
qid = test_set[key]['qid']
q_length = max(questions[qid]['pos_token'].keys())
category = questions[qid]['category'].lower()
answer = questions[qid]['answer'].lower()
feat = {"uid": str(uid), "qid": str(qid), "q_length": q_length, "category": category, "answer": answer}
X_test.append(feat)
Y_test.append(key)
print "Before transform: ", len(X_test)
X_train_length = len(X_train)
X = vec.fit_transform(X_train + X_test)
X_train = X[:X_train_length]
X_test = X[X_train_length:]
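# Illustrative check (not in the original): the re-split matrices keep the
# expected row counts.
print X_train.shape, X_test.shape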
regressor = Ridge()
regressor.fit(X_train, Y_train)
predictions = regressor.predict(X_test)
predictions = sorted([[id, predictions[index]] for index, id in enumerate(Y_test)])
print len(predictions)
predictions[:5]
import csv
predictions.insert(0,["id", "position"])
with open('guess.csv', 'wb') as fp:
writer = csv.writer(fp, delimiter=',')
writer.writerows(predictions)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make training set
Step2: This means that user 0 attempted question 1, which has 77 tokens, and answered at the 61st token.
Step3: Here are 4749 predictions.
|
1,311
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
legislatorsData = pd.read_csv("../data/legislators.csv")
legislatorsData.head()
legislatorsData.columns
legislators = pd.DataFrame(legislatorsData)
legislators.head()
from urllib2 import Request, urlopen
import json
from pandas.io.json import json_normalize
def requestWords( id ):
id = str(id)
url = "http://capitolwords.org/api/1/phrases.json?entity_type=legislator&entity_value="+id+"&apikey=0bf8e7eb6ce146f48217bfee767c998d"
request=Request(url)
response = urlopen(request)
contents = response.read()
len(contents)
if len(contents) > 2:
data = json.loads(contents)
words = json_normalize(data)
list_of_words = words.ngram.tolist()
string_of_words ="|".join(list_of_words)
return string_of_words
else:
return np.nan
legislators['favorite_words'] = legislators.apply(lambda row: requestWords(row['bioguide_id']),axis=1)
print legislators.favorite_words.head(3)
print "All entries before getting rid of entris with no words:", len(legislators.favorite_words)
legislators_words = legislators[legislators.favorite_words.notnull()]
print "Number of legislators with word record:", len(legislators_words.favorite_words)
favorite_words = legislators_words.favorite_words.str.get_dummies(sep = "|")
print favorite_words.head(3)
favorite_words.columns[:100]
favorite_words.shape
favorite_words.columns[260:300]
favorite_words.columns[760:800]
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df=1)
vectorizer
corpus = favorite_words.columns.tolist()
corpus[:3]
X = vectorizer.fit_transform(corpus)
analyze = vectorizer.build_analyzer()
print analyze("economy a this")
vectorizer.get_feature_names()[910:920]
vectorizer.vocabulary_.get('document') # words not seen in the training corpus are completely ignored in future calls to the transform method
vectorizer.transform(['Something completely unrelated']).toarray()
from sklearn.feature_extraction.text import TfidfTransformer
transformer = TfidfTransformer()
transformer
tfidf = transformer.fit_transform(favorite_words)
tfidf_array = tfidf.toarray()
tfidf_array.shape
tfidf_array[20].max()
transformer.idf_
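# Hedged cross-check (illustrative): sklearn's default smooth idf is
# log((1 + n_docs) / (1 + df)) + 1; recomputing it by hand should match idf_.
n_docs = favorite_words.shape[0]
df = (favorite_words.values > 0).sum(axis=0)
print np.allclose(np.log((1. + n_docs) / (1. + df)) + 1., transformer.idf_)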
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df=1)
vectorizer.fit_transform(corpus)
vec_idf = vectorizer.idf_
print len(vec_idf)
words_weight = pd.DataFrame(tfidf_array, index=legislators_words.index , columns=corpus)
print legislators_words.index
print words_weight.index
capitol_words = legislators_words.merge(words_weight, right_index=True, left_index=True)
capitol_words.head()
del capitol_words["a"]
column_names_capitol = capitol_words.columns.tolist()
word_column_names = column_names_capitol[806:]
number_column_names = column_names_capitol[30:805]
capitol_words[word_column_names].head()
capitol_words[word_column_names].sum().max()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is what the data looks like
Step2: Corpus
Step3: Bigram
Step4: Vectorizer
|
1,312
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('../Data/kc_house_data.gl/')
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list
print example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'
print example_output[0] # and the corresponding output
print [sales[0]['constant'],sales[0]['sqft_living']]
print sales[0]['price']
my_weights = np.array([1., 1.]) # the example weights
my_features = example_features[0,] # we'll use the first data point
predicted_value = np.dot(my_features, my_weights)
print predicted_value
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
test_predictions = predict_output(example_features, my_weights)
print test_predictions[0] # should be 1181.0
print test_predictions[1] # should be 2571.0
def feature_derivative(errors, feature):
# Assume that errors and feature are both numpy arrays of the same length (number of data points)
# compute twice the dot product of these vectors as 'derivative' and return the value
derivative = 2 * np.dot(errors,feature)
return(derivative)
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([0., 0.]) # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames 2 numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output # prediction errors in this case is just the -example_output
feature = example_features[:,0] # let's compute the derivative with respect to 'constant', the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print derivative
print -np.sum(example_output)*2 # should be the same as derivative
from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # make sure it's a numpy array
while not converged:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
gradient_sum_squares = 0 # initialize the gradient sum of squares
# while we haven't reached the tolerance yet, update each feature's weight
for i in range(weights.size):
# for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:, i] is the feature column associated with weights[i]
# compute the derivative for weight[i]:
derivative = feature_derivative(errors, feature_matrix[:, i])
# add the squared value of the derivative to the gradient magnitude (for assessing convergence)
gradient_sum_squares += np.power(derivative, 2)
# subtract the step size times the derivative from the current weight
weights[i] -= step_size * derivative
# compute the square-root of the gradient sum of squares to get the gradient magnitude:
gradient_magnitude = sqrt(gradient_sum_squares)
if gradient_magnitude < tolerance:
converged = True
return(weights)
train_data,test_data = sales.random_split(.8,seed=0)
# let's test out the gradient descent
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
simple_weights = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)
print simple_weights
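# Hedged cross-check (illustrative, not required by the assignment): the exact
# normal-equation solution, which gradient descent approximates as the
# tolerance shrinks.
w_exact = np.linalg.solve(np.dot(simple_feature_matrix.T, simple_feature_matrix),
                          np.dot(simple_feature_matrix.T, output))
print w_exact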
print np.round(simple_weights[1:2],1)
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
test_simple_predictions = predict_output(test_simple_feature_matrix, simple_weights)
print np.round(test_simple_predictions[0:1],0)
test_simple_residuals = test_simple_predictions - test_data['price']
test_simple_rss = sum(pow(test_simple_residuals,2))
print test_simple_rss
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
multi_weights = regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance)
print multi_weights
(multi_feature_matrix, multi_output) = get_numpy_data(test_data, model_features, my_output)
multi_predictions = predict_output(multi_feature_matrix, multi_weights)
print np.round(multi_predictions[0:1],0)
print test_data[0]['price']
multi_residuals = multi_predictions - test_data['price']
multi_rss = sum(pow(multi_residuals,2))
print multi_rss
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.
Step3: Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature, e.g. 'price', and will return two things
Step4: For testing let's use the 'sqft_living' feature and a constant as our features and price as our output
Step5: Predicting output given regression weights
Step6: np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations are just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. With this in mind, finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights
Step7: If you want to test your code run the following cell
Step8: Computing the Derivative
Step9: To test your feature derivartive run the following
Step10: Gradient Descent
Step11: A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features.
Step12: Although the gradient descent is designed for multiple regression since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The folowing cell sets up the feature_matrix, output, initial weights and step size for the first model
Step13: Next run your gradient descent with the above parameters.
Step14: How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
Step15: Use your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first)
Step16: Now compute your predictions using test_simple_feature_matrix and your weights from above.
Step17: Quiz Question
Step18: Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
Step19: Running a multiple regression
Step20: Use the above parameters to estimate the model weights. Record these values for your quiz.
Step21: Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
Step22: Quiz Question
Step23: What is the actual price for the 1st house in the test data set?
Step24: Quiz Question
|
1,313
|
<ASSISTANT_TASK:>
Python Code:
#from imp import *
#s=load_source('sygma','/home/nugrid/nugrid/SYGMA/SYGMA_online/SYGMA_dev/sygma.py')
#%pylab nbagg
import sys
import sygma as s
print s.__file__
reload(s)
s.__file__
#import matplotlib
#matplotlib.use('nbagg')
import matplotlib.pyplot as plt
#matplotlib.use('nbagg')
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import UnivariateSpline
import os
# Trigger interactive or non-interactive depending on command line argument
__RUNIPY__ = sys.argv[0]
if __RUNIPY__:
%matplotlib inline
else:
%pylab nbagg
k_N=1e11*0.35/ (1**-0.35 - 30**-0.35) #(I)
N_tot=k_N/1.35 * (1**-1.35 - 30**-1.35) #(II)
print N_tot
Yield_tot=0.1*N_tot
print Yield_tot/1e11
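# Hedged helper (illustrative): the same Salpeter bookkeeping as a reusable
# function, keeping the normalization and number integrals in one place.
def salpeter_ntot(mgal, mlow, mhigh, alpha=2.35):
    k = mgal*(alpha-2.)/(mlow**-(alpha-2.) - mhigh**-(alpha-2.))
    return k/(alpha-1.) * (mlow**-(alpha-1.) - mhigh**-(alpha-1.))
print salpeter_ntot(1e11, 1., 30.) # should reproduce N_tot above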
import sygma as s
reload(s)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,imf_type='salpeter',imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
#% matplotlib inline
import read_yields as ry
path = os.environ['SYGMADIR']+'/yield_tables/agb_and_massive_stars_nugrid_MESAonly_fryer12delay.txt'
#path='/home/christian/NuGrid/SYGMA_PROJECT/NUPYCEE/new/nupycee.bitbucket.org/yield_tables/isotope_yield_table.txt'
ytables = ry.read_nugrid_yields(path,excludemass=[32,60])
zm_lifetime_grid=s1.zm_lifetime_grid_current #__interpolate_lifetimes_grid()
#return [[metallicities Z1,Z2,...], [masses], [[log10(lifetimesofZ1)],
# [log10(lifetimesofZ2)],..] ]
#s1.__find_lifetimes()
#minm1 = self.__find_lifetimes(round(self.zmetal,6),mass=[minm,maxm], lifetime=lifetimemax1)
print Yield_tot_sim
print Yield_tot
print 'ratio should be 1 : ',Yield_tot_sim/Yield_tot
Yield_agb= ( k_N/1.35 * (1**-1.35 - 8.**-1.35) ) * 0.1
Yield_massive= ( k_N/1.35 * (8.**-1.35 - 30**-1.35) ) * 0.1
print 'Should be 1:',Yield_agb/s1.history.ism_iso_yield_agb[-1][0]
print 'Should be 1:',Yield_massive/s1.history.ism_iso_yield_massive[-1][0]
print 'Test total number of SNII agree with massive star yields: ',sum(s1.history.sn2_numbers)*0.1/Yield_massive
print sum(s1.history.sn2_numbers)
s1.plot_totmasses(source='agb')
s1.plot_totmasses(source='massive')
s1.plot_totmasses(source='all')
s1.plot_totmasses(source='sn1a')
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\
imf_bdys=[1,30],iniZ=0,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
s1.plot_mass(specie='H',label='H, sim',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.35 * (m**-1.35 - 30.**-1.35) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=15,label='H, semi')
plt.legend(loc=4)
k_N=1e11*0.35/ (5**-0.35 - 20**-0.35)
N_tot=k_N/1.35 * (5**-1.35 - 20**-1.35)
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',\
imf_bdys=[5,20],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print 'Should be 1:', Yield_tot_sim/Yield_tot
k_N=1e11*0.35/ (1**-0.35 - 5**-0.35)
N_tot=k_N/1.35 * (1**-1.35 - 5**-1.35)
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\
imf_bdys=[1,5],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print 'Should be 1: ',Yield_tot_sim/Yield_tot
alphaimf = 1.5 #Set test alphaimf
k_N=1e11*(alphaimf-2)/ (-1**-(alphaimf-2) + 30**-(alphaimf-2))
N_tot=k_N/(alphaimf-1) * (-1**-(alphaimf-1) + 30**-(alphaimf-1))
Yield_tot=0.1*N_tot
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='alphaimf',alphaimf=1.5,imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print 'Should be 1 :',Yield_tot/Yield_tot_sim
def imf_times_m(mass):
if mass<=1:
return 0.158 * np.exp( -np.log10(mass/0.079)**2 / (2.*0.69**2))
else:
return mass*0.0443*mass**(-2.3)
k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )
N_tot=k_N/1.3 * 0.0443* (1**-1.3 - 30**-1.3)
Yield_tot=N_tot * 0.1
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='chabrier',imf_bdys=[0.01,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print Yield_tot
print Yield_tot_sim
print 'Should be 1 :',Yield_tot/Yield_tot_sim
plt.figure(11)
s1.plot_mass(fig=11,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.3 * 0.0443*(m**-1.3 - 30.**-1.3) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
def imf_times_m(mass):
p0=1.
p1=0.08**(-0.3+1.3)
p2=0.5**(-1.3+2.3)
p3= 1**(-2.3+2.3)
if mass<0.08:
return mass*p0*mass**(-0.3)
elif mass < 0.5:
return mass*p1*mass**(-1.3)
else: #mass>=0.5:
return mass*p1*p2*mass**(-2.3)
k_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )
p1=0.08**(-0.3+1.3)
p2=0.5**(-1.3+2.3)
N_tot=k_N/1.3 * p1*p2*(1**-1.3 - 30**-1.3)
Yield_tot=N_tot * 0.1
reload(s)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='kroupa',imf_bdys=[0.01,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield[-1][0]
print 'Should be 1: ',Yield_tot/Yield_tot_sim
plt.figure(111)
s1.plot_mass(fig=111,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
def yields(m,k_N):
return ( k_N/1.3 *p1*p2* (m**-1.3 - 30.**-1.3) ) * 0.1
yields1=[]
for m1 in m:
yields1.append(yields(m1,k_N))
plt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=False,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print (s1.history.ism_elem_yield_1a[0]),(s1.history.ism_elem_yield_1a[-1])
print (s1.history.ism_elem_yield[0]),(s1.history.ism_elem_yield[-1])
print (s2.history.ism_elem_yield_1a[0]),(s2.history.ism_elem_yield_1a[-1])
print (s2.history.ism_elem_yield[0]),(s2.history.ism_elem_yield[-1])
print (s1.history.ism_elem_yield[-1][0] + s2.history.ism_elem_yield_1a[-1][0])/s2.history.ism_elem_yield[-1][0]
s2.plot_mass(fig=33,specie='H-1',source='sn1a') #plot s1 data (without sn) cannot be plotted -> error, maybe change plot function?
#import read_yields as ry
import sygma as s
reload(s)
plt.figure(99)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
#print grid_lifetimes
#print grid_masses
#10**spline_lifetime(np.log10(7.902e6))
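# Illustrative use of the spline (assumes spline_lifetime from above): the
# progenitor mass whose lifetime is 1 Gyr.
print 10**spline_lifetime(np.log10(1e9)), 'Msun'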
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= 2e9
mmin=0
mmax=0
inte=0
#follwing is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print 'IMF test',norm*m**-2.35
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle since normalization is set: nb_1a_per_m the above calculation is not necessary anymore
Yield_tot=n1a*1e11*0.1 # special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s1.plot_mass(specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
sum(s1.wd_sn1a_range1)/sum(s1.wd_sn1a_range)
s1.plot_sn_distr(xaxis='time',fraction=False)
reload(s)
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline(np.log10(t))
#print 'mlim',mlim
if mlim>8.:
#print t
#print mlim
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
tau= 1e9 #3.3e9 #characteristic delay time
sigma=0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s2.plot_mass(fig=988,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
sum(s2.wd_sn1a_range1)/sum(s2.wd_sn1a_range)
import sygma as s
reload(s)
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e8,tend=1.3e10,sn1a_rate='maoz',imf_type='salpeter',
imf_bdys=[1,30],special_timesteps=-1,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
from scipy.interpolate import UnivariateSpline
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
from scipy.integrate import quad
def spline1(t):
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
return max(minm_prog1a,10**spline_lifetime(np.log10(t)))
#funciton giving the total (accummulatitive) number of WDs at each timestep
def wd_number(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
if mlim>maxm_prog1a:
return 0
else:
mmin=0
mmax=0
inte=0
#normalized to 1msun!
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
return norm*m**-2.35 #self.__imf(mmin,mmax,inte,m)
def maoz_sn_rate(m,t):
return wd_number(m,t)* 4.0e-13 * (t/1.0e9)**-1
def maoz_sn_rate_int(t):
return quad( maoz_sn_rate,spline1(t),8,args=t)[0]
#in this formula, (paper) sum_sn1a_progenitors number of
maxm_prog1a=8
longtimefornormalization=1.3e10 #yrs
fIa=0.00147
fIa=1e-3
#A = (fIa*s2.number_stars_born[1]) / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]
A = 1e-3 / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]
print 'Norm. constant A:',A
n1a= A* quad(maoz_sn_rate_int,0,1.3e10)[0]
Yield_tot=n1a*1e11*0.1 #specialfactor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s2.plot_mass(fig=44,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
#yields= a* dblquad(wdfrac,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1
yields= A*quad(maoz_sn_rate_int,0,t)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=3)
import sygma as s
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
stellar_param_on=False)
print 'Should be 0: ',s1.history.age[0]
print 'Should be 1: ',s1.history.age[-1]/1.3e10
print 'Should be 1: ',s1.history.timesteps[0]/1e7
print 'Should be 1: ',s1.history.timesteps[-1]/1e7
print 'Should be 1: ',sum(s1.history.timesteps)/1.3e10
import sygma as s
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.5e9,special_timesteps=200,imf_type='salpeter',
imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print 'Should be 0: ',s2.history.age[0]
print 'Should be 1: ',s2.history.age[-1]/1.5e9
print 'Should be 201: ',len(s2.history.age)
print 'Should be 1: ',s2.history.timesteps[0]/1e7
#print 'in dt steps: ',s2.history.timesteps[1]/1e7,s1.history.timesteps[2]/1e7,'..; larger than 1e7 at step 91!'
print 'Should be 200: ',len(s2.history.timesteps)
print 'Should be 1: ',sum(s2.history.timesteps)/1.5e9
plt.figure(55)
plt.plot(s1.history.age[1:],s1.history.timesteps,label='linear (constant) scaled',marker='+')
plt.plot(s2.history.age[1:],s2.history.timesteps,label='log scaled',marker='+')
plt.yscale('log');plt.xscale('log')
plt.xlabel('age/years');plt.ylabel('timesteps/years');plt.legend(loc=4)
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s4=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s5=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
s6=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)
#print s3.history.ism_iso_yield[-1][0] == s4.history.ism_iso_yield[-1][0] why false?
print 'should be 1 ',s3.history.ism_iso_yield[-1][0]/s4.history.ism_iso_yield[-1][0]
#print s3.history.ism_iso_yield[-1][0],s4.history.ism_iso_yield[-1][0]
print 'should be 1',s5.history.ism_iso_yield[-1][0]/s6.history.ism_iso_yield[-1][0]
#print s5.history.ism_iso_yield[-1][0],s6.history.ism_iso_yield[-1][0]
s1=s.sygma(iolevel=0,mgal=1e7,dt=1e7,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e8,dt=1e8,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s3=s.sygma(iolevel=0,mgal=1e9,dt=1e9,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
print 'At timestep 0: ',sum(s1.history.ism_elem_yield[0])/1e7,sum(s2.history.ism_elem_yield[0])/1e8,sum(s3.history.ism_elem_yield[0])/1e9
print 'At timestep 0: ',sum(s1.history.ism_iso_yield[0])/1e7,sum(s2.history.ism_iso_yield[0])/1e8,sum(s3.history.ism_iso_yield[0])/1e9
print 'At last timestep, should be the same fraction: ',sum(s1.history.ism_elem_yield[-1])/1e7,sum(s2.history.ism_elem_yield[-1])/1e8,sum(s3.history.ism_elem_yield[-1])/1e9
print 'At last timestep, should be the same fraction: ',sum(s1.history.ism_iso_yield[-1])/1e7,sum(s2.history.ism_iso_yield[-1])/1e8,sum(s3.history.ism_iso_yield[-1])/1e9
reload(s)
s1=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s2=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
pop3_table='yield_tables/popIII_h1.txt')
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s4=s.sygma(iolevel=0,mgal=1e11,dt=3e7,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s1.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 1',label2='SNII, rate 1',marker1='o',marker2='s',shape2='-',markevery=1)
s2.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='p',markevery=1,shape2='-.')
s4.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='+',markevery=1,shape2=':',color2='y')
s3.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='x',markevery=1,shape2='--')
plt.xlim(6e6,7e7)
#plt.xlim(6.5e6,4e7)
plt.vlines(7e6,1e2,1e9)
plt.ylim(1e2,1e4)
print s1.history.sn2_numbers[1]/s1.history.timesteps[0]
print s2.history.sn2_numbers[1]/s2.history.timesteps[0]
#print s1.history.timesteps[:5]
#print s2.history.timesteps[:5]
s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,
table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt',
stellar_param_on=False)
s4=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',
pop3_table='yield_tables/popIII_h1.txt',stellar_param_on=False)
s3.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s',markevery=1)
s4.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, number',label2='SNII number',marker1='d',marker2='p')
plt.xlim(3e7,1e10)
s1.plot_sn_distr(fig=77,rate=True,marker1='o',marker2='s',markevery=5)
s2.plot_sn_distr(fig=77,rate=True,marker1='x',marker2='^',markevery=1)
#s1.plot_sn_distr(rate=False)
#s2.plot_sn_distr(rate=True)
#s2.plot_sn_distr(rate=False)
plt.xlim(1e6,1.5e10)
#plt.ylim(1e2,1e4)
import sygma as s; reload(s)
s1=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=8,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
s2=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=10,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
Yield_tot_sim_8=s1.history.ism_iso_yield_agb[-1][0]
Yield_tot_sim_10=s2.history.ism_iso_yield_agb[-1][0]
alphaimf=2.35
k_N=1e11*(alphaimf-2)/ (-1.65**-(alphaimf-2) + 30**-(alphaimf-2))
N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 8**-(alphaimf-1))
Yield_tot_8=0.1*N_tot
N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 10**-(alphaimf-1))
Yield_tot_10=0.1*N_tot
#N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 5**-(alphaimf-1))
#Yield_tot_5=0.1*N_tot
print '1:',Yield_tot_sim_8/Yield_tot_8
print '1:',Yield_tot_sim_10/Yield_tot_10
#print '1:',Yield_tot_sim_5/Yield_tot_5
s1=s.sygma(starbursts=[0.1,0.1],iolevel=1,mgal=1e11,dt=1e7,imf_type='salpeter',
imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s0=s.sygma(iolevel=0,iniZ=0.0001,imf_bdys=[0.01,100],imf_yields_range=[1,100],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IMF notes
Step2: The total number of stars $N_{tot}$ is then
Step3: With a yield ejected of $0.1 Msun$, the total amount ejected is
Step4: compared to the simulation
Step5: Compare both results
Step6: Test of distinguishing between massive and AGB sources
Step7: Calculating yield ejection over time
Step8: Simulation results in the plot above should agree with semi-analytical calculations.
Step9: Select imf_bdys=[1,5]
Step10: Results
Step11: Test of parameter imf_type
Step12: Chabrier
Step13: Simulation should agree with semi-analytical calculations for Chabrier IMF.
Step14: Simulation results compared with semi-analytical calculations for Kroupa IMF.
Step15: Test of parameter sn1a_rate (DTD)
Step16: Small test
Step17: Simulation results compared with semi-analytical calculations for the SNIa sources with the Wiersma (exp) implementation.
Step18: Wiersma (Gauss)
Step19: Simulation results compared with semi-analytical calculations for the SNIa sources with the Wiersma (Gauss) implementation.
Step20: SNIa implementation
Step21: Check trend
Step22: Test of parameter tend, dt and special_timesteps
Step23: First timestep size of 1e7, then in log space to tend with a total number of steps of 200; Note
Step24: Choice of dt should not change final composition
Step25: Test of parameter mgal - the total mass of the SSP
Step26: Test of SN rate
Step27: Rate does not depend on timestep type
Step28: Test of parameter transitionmass
Step29: 2 starbursts
Step30: imf_yields_range - include yields only in this mass range
|
1,314
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-3', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
1,315
|
<ASSISTANT_TASK:>
Python Code:
# load in the training corpus
with open("../datasets/WSJ_02-21.pos", 'r') as f:
training_corpus = f.readlines() # list
print("A few items of the training corpus list: ")
print(training_corpus[0:5])
len(training_corpus)
# load in the test corpus
with open("../datasets/WSJ_24.pos", 'r') as f:
testing_corpus = f.readlines() # list
print("A sample of the testing corpus")
print(testing_corpus[0:10])
len(testing_corpus)
# read the vocabulary data, split by each line of text, and save the list
with open("../datasets/hmm_vocab.txt", 'r') as f:
voc_l = f.read().split('\n') # list
print("A few items of the vocabulary list")
print(voc_l[0:25])
print()
print("A few items at the end of the vocabulary list")
print(voc_l[-25:])
# vocab: dictionary that has the index of the corresponding words
vocabulary = {}
# Get the index of the corresponding words.
for i, word in enumerate(sorted(voc_l)):
vocabulary[word] = i
len(vocabulary)
print("Vocabulary dictionary: key is the word, value is a unique integer")
cnt = 0
for k,v in vocabulary.items():
print(f"{k}:{v}")
cnt += 1
if cnt > 20:
break
with open("../datasets/test.words", 'r') as f:
y = f.readlines() # list
len(y)
print(y[0:20])
import string
# Punctuation characters
punct = set(string.punctuation)
# Morphology rules used to assign unknown word tokens
noun_suffix = ["action", "age", "ance", "cy", "dom", "ee", "ence", "er", "hood", "ion", "ism", "ist", "ity", "ling", "ment", "ness", "or", "ry", "scape", "ship", "ty"]
verb_suffix = ["ate", "ify", "ise", "ize"]
adj_suffix = ["able", "ese", "ful", "i", "ian", "ible", "ic", "ish", "ive", "less", "ly", "ous"]
adv_suffix = ["ward", "wards", "wise"]
def assign_unk(tok):
Assign unknown word tokens
# Digits
if any(char.isdigit() for char in tok):
return "--unk_digit--"
# Punctuation
elif any(char in punct for char in tok):
return "--unk_punct--"
# Upper-case
elif any(char.isupper() for char in tok):
return "--unk_upper--"
# Nouns
elif any(tok.endswith(suffix) for suffix in noun_suffix):
return "--unk_noun--"
# Verbs
elif any(tok.endswith(suffix) for suffix in verb_suffix):
return "--unk_verb--"
# Adjectives
elif any(tok.endswith(suffix) for suffix in adj_suffix):
return "--unk_adj--"
# Adverbs
elif any(tok.endswith(suffix) for suffix in adv_suffix):
return "--unk_adv--"
return "--unk--"
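# A quick sanity check of the morphology rules above (illustrative tokens,
# not taken from the assignment data):
print(assign_unk("2024"))           # --unk_digit--
print(assign_unk("globalization"))  # --unk_noun--
print(assign_unk("towards"))        # --unk_adv--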
# Helper: substitutes words not in the vocabulary with an "unknown" token
def get_word_tag(line, vocab):
if not line.split():
word = "--n--"
tag = "--s--"
return word, tag
else:
word, tag = line.split()
if word not in vocab:
# Handle unknown words
word = assign_unk(word)
return word, tag
return None
from collections import defaultdict
def create_dictionaries(corpus, vocab):
Input:
corpus: a corpus where each line has a word followed by its tag.
vocab: a dictionary where keys are words in vocabulary and value is an index
Output:
emission_counts: a dictionary where the keys are (tag, word) and the values are the counts
transition_counts: a dictionary where the keys are (prev_tag, tag) and the values are the counts
tag_counts: a dictionary where the keys are the tags and the values are the counts
# initialize the dictionaries using defaultdict
emission_counts = defaultdict(int)
transition_counts = defaultdict(int)
tag_counts = defaultdict(int)
# Initialize "prev_tag" (previous tag) with the start state, denoted by '--s--'
prev_tag = '--s--'
# use 'i' to track the line number in the corpus
i = 0
# Each item in the training corpus contains a word and its POS tag
# Go through each word and its tag in the training corpus
for word_tag in corpus:
# Increment the word_tag count
i += 1
# Every 50,000 words, print the word count, just to see the progress
if i % 50000 == 0:
print(f"word count = {i}")
# get the word and tag using the get_word_tag helper function
word, tag = get_word_tag(word_tag, vocab)
# Increment the transition count for the previous word and tag
transition_counts[(prev_tag, tag)] += 1
# Increment the emission count for the tag and word
emission_counts[(tag, word)] += 1
# Increment the tag count
tag_counts[tag] += 1
# Set the previous tag to this tag (for the next iteration of the loop)
prev_tag = tag
return emission_counts, transition_counts, tag_counts
emissionCounts, transitionCounts, tagCounts = create_dictionaries(training_corpus, vocabulary)
# get all the POS states
tags = sorted(tagCounts.keys())
print(f"Number of POS tags: {len(tags)}")
print("View these POS tags")
print(tags)
print("transition examples: ")
for ex in list(transitionCounts.items())[:3]:
print(ex)
print()
print("emission examples: ")
for ex in list(emissionCounts.items())[200:203]:
print (ex)
print()
print("ambiguous word example: ")
for tup,cnt in emissionCounts.items():
if tup[1] == 'back': print (tup, cnt)
# we assume that we have the emission dictionary and the tags available
def getWordPOS(word):
Input:
word: the word to be tagged.
Output:
pos_final: the most probable tag for this word; "- Unknown -" if the word is not seen in the emission counts
count_final = 0
pos_final = ''
for pos in tags:
# define the key as the tuple containing the POS and word
key = (pos,word)
# check if the (pos, word) key exists in the emissionCounts dictionary
if key in emissionCounts: # emissionCounts is global
# get the emission count of the (pos,word) tuple
count = emissionCounts[key]
# keep track of the POS with the largest count
if count > count_final:
# update the final count (largest count)
count_final = count
# update the final POS
pos_final = pos
if count_final == 0:
return "- Unknown -"
else:
return pos_final
getWordPOS('dog')
getWordPOS('dogs')
def getSentencePOS(sentence):
Input:
sentence: the sentence to be tagged. String.
Output:
a list of tuples: each word with its most probable tag
words = sentence.split(" ") # split into words (a list)
sentencePOS = [] # second list for the tags
for word in words:
wordPOS = getWordPOS(word) # find the word's POS
sentencePOS.append(wordPOS)
# return a list of tuples: (word, POS)
return list(zip(words, sentencePOS)) # combine the two lists
print(getSentencePOS("I have a black cat"))
print(getSentencePOS("I have not a big dog"))
print(getSentencePOS("A lion is always faster than the turtle"))
print(getSentencePOS("I eat pizza and pasta but I do not eat meat"))
print(getSentencePOS("I work in Shanghai"))
orig = []
y_prepr = []
# we already read the words from testing dataset into 'y'
for cnt, word in enumerate(y):
# End of sentence
if not word.split():
orig.append(word.strip())
word = "--n--"
y_prepr.append(word)
continue
# Handle unknown words
elif word.strip() not in vocabulary:
orig.append(word.strip())
word = assign_unk(word)
y_prepr.append(word)
continue
else:
orig.append(word.strip())
y_prepr.append(word.strip())
assert(len(orig) == len(y)) # just to be sure
assert(len(y_prepr) == len(y))
#corpus without tags, preprocessed
print('The length of the preprocessed testing corpus: ', len(y_prepr))
print('This is a sample of the testing corpus: ')
print(y_prepr[0:20])
# Initialize the number of correct predictions to zero
num_correct = 0
# Get the (tag, word) tuples, stored as a set
all_words = set(emissionCounts.keys())
# Get the number of (word, POS) tuples in the corpus 'y'
total = len(testing_corpus)
for word, y_tup in zip(y_prepr, testing_corpus):
# Split the (word, POS) string into a list of two items
y_tup_l = y_tup.split()
# Verify that y_tup contain both word and POS
if len(y_tup_l) == 2:
# Set the true POS label for this word
true_label = y_tup_l[1]
else:
# If the y_tup didn't contain word and POS, go to next word
continue
count_final = 0
pos_final = ''
if word in vocabulary:
for pos in tags:
# define the key as the tuple containing the POS and word
key = (pos,word)
# check if the (pos, word) key exists in the emissionCounts dictionary
if key in emissionCounts:
# get the emission count of the (pos,word) tuple
count = emissionCounts[key]
# keep track of the POS with the largest count
if count > count_final:
# update the final count (largest count)
count_final = count
# update the final POS
pos_final = pos
# If the final POS (with the largest count) matches the true POS:
if pos_final == true_label:
# Update the number of correct predictions
num_correct += 1
accuracy = num_correct / total
print(f"Accuracy of prediction using predict_pos is {accuracy:.4f}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see, the training_corpus is a list with all words extracted from English articles, together with their POS tag.
Step2: The Testing Corpus is similar, just a subset of the Training one.
Step3: The vocabulary is an indexed list of words; almost 24K of them.
Step4: Testing words
Step7: Parts-of-speech tagging
Step8: We mapped a total of 46 POS tags, which is great
Step9: The transition dictionary shows how often we go from one tag to another; for example, going from DT (a determiner, an article such as 'the' or 'a') to an NNP (proper noun) happens 9044 times
Step11: The EmissionCounts dictionary is the most interesting
Step13: So dog is a singular noun (NN) and dogs is a plural noun (NNS).
Step14: What?!
Step15: Ok, back to dogs.
Step16: 'A' and 'The' are articles or determiners (DT), 'is' is a 3rd-person verb (VBZ), 'always' an adverb (RB) and 'faster' is a comparative (RBR). Everything is still correct
Step17: Here 'eat' is a verb (VB) while 'and', 'but' are conjunctions (CC). Good
Step18: And here you can see that it is failing
Step19: Prediction
|
1,316
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-2', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
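# Example of how a completed enumeration cell might look (illustrative
# values only, not an actual description of this model; strings must match
# the valid choices above verbatim):
# DOC.set_value("stratosphere")
# DOC.set_value("troposhere")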
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
1,317
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
class Module(object):
def __init__ (self):
self.output = None
self.gradInput = None
self.training = True
Basically, you can think of a module as a black box
which can process `input` data and produce `output` data.
This is like applying a function which is called `forward`:
output = module.forward(input)
The module should be able to perform a backward pass: to differentiate the `forward` function.
More, it should be able to differentiate it if is a part of chain (chain rule).
The latter implies there is a gradient from previous step of a chain rule.
gradInput = module.backward(input, gradOutput)
def forward(self, input):
Takes an input object, and computes the corresponding output of the module.
return self.updateOutput(input)
def backward(self,input, gradOutput):
Performs a backpropagation step through the module, with respect to the given input.
This includes
- computing a gradient w.r.t. `input` (is needed for further backprop),
- computing a gradient w.r.t. parameters (to update parameters while optimizing).
self.updateGradInput(input, gradOutput)
self.accGradParameters(input, gradOutput)
return self.gradInput
def updateOutput(self, input):
Computes the output using the current parameter set of the class and input.
This function returns the result which is stored in the `output` field.
Make sure to both store the data in `output` field and return it.
# The easiest case:
# self.output = input
# return self.output
pass
def updateGradInput(self, input, gradOutput):
Computing the gradient of the module with respect to its own input.
This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly.
The shape of `gradInput` is always the same as the shape of `input`.
Make sure to both store the gradients in `gradInput` field and return it.
# The easiest case:
# self.gradInput = gradOutput
# return self.gradInput
pass
def accGradParameters(self, input, gradOutput):
Computing the gradient of the module with respect to its own parameters.
No need to override if module has no parameters (e.g. ReLU).
pass
def zeroGradParameters(self):
Zeroes `gradParams` variable if the module has params.
pass
def getParameters(self):
Returns a list with its parameters.
If the module does not have parameters return empty list.
return []
def getGradParameters(self):
Returns a list with gradients with respect to its parameters.
If the module does not have parameters return empty list.
return []
def train(self):
Sets training mode for the module (named `train` so that it does not clash
with the boolean `self.training` attribute set in `__init__`).
Training and testing behaviour differs for Dropout, BatchNorm.
self.training = True
def evaluate(self):
Sets evaluation mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
self.training = False
def __repr__(self):
Pretty printing. Should be overridden in every module if you want
to have a readable description.
return "Module"
class Sequential(Module):
This class implements a container, which processes `input` data sequentially.
`input` is processed by each module (layer) in self.modules consecutively.
The resulting array is called `output`.
def __init__ (self):
super(Sequential, self).__init__()
self.modules = []
def add(self, module):
Adds a module to the container.
self.modules.append(module)
def updateOutput(self, input):
Basic workflow of FORWARD PASS:
y_0 = module[0].forward(input)
y_1 = module[1].forward(y_0)
...
output = module[n-1].forward(y_{n-2})
Just write a little loop.
# Your code goes here. ################################################
# module = self.modules[0]
# y_curr = module.forward(input)
# for i in range(1, len(self.modules)):
# y_curr = self.modules[i].forward(y_curr)
# self.output = y_curr
# return self.output
#
# self.modules[0].output = self.modules[0].forward(input)
# for i in range(1, len(self.modules)):
# self.modules[i].output = self.modules[i].forward(self.modules[i-1].output)
# self.output = self.modules[-1].output
self.y = []
self.y.append(self.modules[0].forward(input))
for i in range(1, len(self.modules)):
self.y.append(self.modules[i].forward(self.y[-1]))
self.output = self.y[-1]
return self.output
def backward(self, input, gradOutput):
Workflow of BACKWARD PASS:
g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput)
g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1})
...
g_1 = module[1].backward(y_0, g_2)
gradInput = module[0].backward(input, g_1)
!!!
To each module you need to provide the input that the module saw during the
forward pass; it is used while computing gradients.
Make sure that the input for the `i-th` layer is the output of `module[i-1]`
(exactly the same input as in the forward pass)
and NOT the `input` to this Sequential module.
!!!
# Your code goes here. ################################################
# self.modules[-1].gradInput = self.modules[-1].backward(self.modules[-2].output, gradOutput)
# for i in range(len(self.modules) - 2, 0, -1):
# self.modules[i].gradInput = self.modules[i].backward(self.modules[i-1].output, self.modules[i+1].gradInput)
# i = 0
# self.modules[0].gradInput = self.modules[0].backward(input, self.modules[i+1].gradInput)
# self.gradInput = self.modules[0].gradInput
self.gradInput = self.modules[-1].backward(self.y[-2], gradOutput)
for i in range(len(self.modules) - 2, 0, -1):
self.gradInput = self.modules[i].backward(self.y[i-1], self.gradInput)
self.gradInput = self.modules[0].backward(input, self.gradInput)
return self.gradInput
def zeroGradParameters(self):
for module in self.modules:
module.zeroGradParameters()
def getParameters(self):
Should gather all parameters in a list.
return [x.getParameters() for x in self.modules]
def getGradParameters(self):
Should gather all gradients w.r.t parameters in a list.
return [x.getGradParameters() for x in self.modules]
def __repr__(self):
string = "".join([str(x) + '\n' for x in self.modules])
return string
def __getitem__(self,x):
return self.modules.__getitem__(x)
class Linear(Module):
A module which applies a linear transformation
A common name is fully-connected layer, InnerProductLayer in caffe.
The module should work with 2D input of shape (n_samples, n_features).
def __init__(self, n_in, n_out):
super(Linear, self).__init__()
# This is a nice initialization
stdv = 1./np.sqrt(n_in)
self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in))
self.b = np.random.uniform(-stdv, stdv, size = n_out)
self.gradW = np.zeros_like(self.W)
self.gradb = np.zeros_like(self.b)
def updateOutput(self, input):
# Your code goes here. ################################################
# N = input.shape[0]
# newx = input.reshape((N,-1))
self.output = input.dot(self.W.T) + self.b
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
# x, dout = input, gradOutput
# N = x.shape[0]
# D = np.prod(x.shape[1:])
# x2 = np.reshape(x, (N, D))
# dx2 = np.dot(dout, w.T) # N x D
# dw = np.dot(x2.T, dout) # D x M
# db = np.dot(dout.T, np.ones(N)) # M x 1
# dx = np.reshape(dx2, x.shape)
# self.gradInput = dx, dw, db #FIXME ?
# self.gradb = np.sum(gradOutput,axis = 0)
self.gradInput = gradOutput.dot(self.W)#.reshape(*input.shape)
# self.gradW = input.reshape((input.shape[0],-1)).T.dot(gradOutput)
return self.gradInput
def accGradParameters(self, input, gradOutput):
# Your code goes here. ################################################
self.gradb = np.sum(gradOutput,axis = 0)
self.gradW = gradOutput.T.dot(input)
# self.gradW = input.reshape((input.shape[0],-1)).T.dot(gradOutput)
# pass
def zeroGradParameters(self):
self.gradW.fill(0)
self.gradb.fill(0)
def getParameters(self):
return [self.W, self.b]
def getGradParameters(self):
return [self.gradW, self.gradb]
def __repr__(self):
s = self.W.shape
q = 'Linear %d -> %d' %(s[1],s[0])
return q
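# The gradient checks below rely on `rel_error` and `eval_numerical_gradient_array`.
# They are assumed to come from the course's gradient-check utilities; a minimal
# sketch of both is given here so the test cells can run standalone.
def rel_error(x, y):
    # maximum relative error, guarded against division by zero
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def eval_numerical_gradient_array(f, x, df, h=1e-5):
    # centred finite differences of f at x, contracted with the upstream gradient df
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        ix = it.multi_index
        oldval = x[ix]
        x[ix] = oldval + h
        pos = f(x).copy()
        x[ix] = oldval - h
        neg = f(x).copy()
        x[ix] = oldval
        grad[ix] = np.sum((pos - neg) * df) / (2 * h)
        it.iternext()
    return grad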
input_dim = 3
output_dim = 2
x = np.random.randn(5, input_dim)
w = np.random.randn(output_dim, input_dim)
b = np.random.randn(output_dim)
dout = np.random.randn(5, output_dim)
linear = Linear(input_dim, output_dim)
def update_W_matrix(new_W):
linear.W = new_W
return linear.forward(x)
def update_bias(new_b):
linear.b = new_b
return linear.forward(x)
dx = linear.backward(x, dout)
dx_num = eval_numerical_gradient_array(lambda x: linear.forward(x), x, dout)
dw_num = eval_numerical_gradient_array(update_W_matrix, w, dout)
db_num = eval_numerical_gradient_array(update_bias, b, dout)
print('Testing Linear_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, linear.gradW))
print('db error: ', rel_error(db_num, linear.gradb))
class SoftMax(Module):
def __init__(self):
super(SoftMax, self).__init__()
def updateOutput(self, input):
# start with normalization for numerical stability
self.output = np.subtract(input, input.max(axis=1, keepdims=True))
# Your code goes here. ################################################
self.output = np.exp(self.output)
# out_sum = self.output.sum(axis=1, keepdims=True)
self.output = np.divide(self.output, self.output.sum(axis=1, keepdims=True))
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
# N = self.output.shape[0]
# self.gradInput = self.output.copy()
# self.gradInput[np.arange(N).astype(np.int), gradOutput.astype(np.int)] -= 1
# self.gradInput /= N
batch_size, n_feats = self.output.shape
a = self.output.reshape(batch_size, n_feats, -1)
b = self.output.reshape(batch_size, -1, n_feats)
self.gradInput = np.multiply(gradOutput.reshape(batch_size, -1, n_feats),
np.subtract(np.multiply(np.eye(n_feats), a),
np.multiply(a, b))).sum(axis=2)
return self.gradInput
def __repr__(self):
return "SoftMax"
soft_max = SoftMax()
x = np.random.randn(5, 3)
dout = np.random.randn(5, 3)
dx_numeric = eval_numerical_gradient_array(lambda x: soft_max.forward(x), x, dout)
dx = soft_max.backward(x, dout)
# The error should be around 1e-10
print('Testing SoftMax grad:')
print('dx error: ', rel_error(dx_numeric, dx))
class Dropout(Module):
def __init__(self, p=0.5):
super(Dropout, self).__init__()
self.p = p
self.mask = None
def updateOutput(self, input):
# Your code goes here. ################################################
# inverted dropout: p is treated here as the keep probability; survivors are
# rescaled by 1/p at training time so evaluation needs no rescaling
self.mask = np.random.binomial(1, self.p, input.shape) / self.p if self.training else np.ones(input.shape)
self.output = input*self.mask
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
self.gradInput = gradOutput*self.mask
return self.gradInput
def __repr__(self):
return "Dropout"
class ReLU(Module):
def __init__(self):
super(ReLU, self).__init__()
def updateOutput(self, input):
self.output = np.maximum(input, 0)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput , input > 0)
return self.gradInput
def __repr__(self):
return "ReLU"
class LeakyReLU(Module):
def __init__(self, slope = 0.03):
super(LeakyReLU, self).__init__()
self.slope = slope
def updateOutput(self, input):
# Your code goes here. ################################################
# self.output = np.maximum(input, input*self.slope)
self.output = input.copy()
self.output[self.output < 0] *= self.slope
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
# self.gradInput = np.multiply(gradOutput, input > 0) #FIXME
self.gradInput = gradOutput.copy()
self.gradInput[input < 0] *= self.slope
return self.gradInput
def __repr__(self):
return "LeakyReLU"
class Criterion(object):
def __init__ (self):
self.output = None
self.gradInput = None
def forward(self, input, target):
Given an input and a target, compute the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateOutput`.
return self.updateOutput(input, target)
def backward(self, input, target):
Given an input and a target, compute the gradients of the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateGradInput`.
return self.updateGradInput(input, target)
def updateOutput(self, input, target):
Function to override.
return self.output
def updateGradInput(self, input, target):
Function to override.
return self.gradInput
def __repr__(self):
Pretty printing. Should be overridden in every module if you want
to have a readable description.
return "Criterion"
class MSECriterion(Criterion):
def __init__(self):
super(MSECriterion, self).__init__()
def updateOutput(self, input, target):
self.output = np.sum(np.power(input - target,2)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
self.gradInput = (input - target) * 2 / input.shape[0]
return self.gradInput
def __repr__(self):
return "MSECriterion"
class ClassNLLCriterion(Criterion):
def __init__(self):
super(ClassNLLCriterion, self).__init__()
def updateOutput(self, input, target):
# Use this trick to avoid numerical errors
eps = 1e-15
input_clamp = np.clip(input, eps, 1 - eps)
# Your code goes here. ################################################
# N = input_clamp.shape[0]
# self.output = -np.sum(np.log(input_clamp[np.arange(N).astype(np.int), target.astype(np.int)]+1e-8)) / N
self.output = -np.sum(np.multiply(target, np.log(input_clamp))) / len(input)
return self.output
def updateGradInput(self, input, target):
# Use this trick to avoid numerical errors
input_clamp = np.maximum(1e-15, np.minimum(input, 1 - 1e-15) )
# Your code goes here. ################################################
# dL/dp for NLL with probability inputs is -target/p, averaged over the
# batch; the softmax Jacobian is applied separately by the SoftMax module
self.gradInput = -np.divide(target, input_clamp) / len(input)
return self.gradInput
def __repr__(self):
return "ClassNLLCriterion"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step12: Module is an abstract class which defines fundamental methods necessary for training a neural network. You do not need to change anything here, just read the comments.
Step19: Sequential container
Step21: Layers
Step22: This one is probably the hardest, but like the others it only takes about 5 lines of code in total.
Step23: Implement dropout. The idea and implementation are really simple
Step24: Activation functions
Step25: Implement the Leaky Rectified Linear Unit. Experiment with the slope.
Step31: Criterions
Step32: The MSECriterion, which is the basic L2 norm usually used for regression, is implemented here for you.
Step33: Your task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Nevertheless there is a sum over y (target) in that formula,
|
1,318
|
<ASSISTANT_TASK:>
Python Code:
# Load the required modules (this can take a while!)
# We import the modules explicitly, so you can see everything that is needed.
# Be careful, though: the modules are then referenced differently from the
# usual aliases, e.g. pyplot instead of plt
from matplotlib import pyplot
pyplot.rcParams["figure.figsize"] = (15,12)
%matplotlib inline
import numpy as np # not actually needed below
from pandas import read_csv
from pandas import set_option
from pandas.plotting import scatter_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
# Load the data [12100 records with 10 sensors and one Label column, a (12100x11) matrix]
url = 'sensordaten-10.csv'
datensatz = read_csv(url, sep=';', header=0)
# Print df.shape
print(datensatz.shape)
# Print df.dtypes
# The Label column contains the classification R or M
set_option('display.max_rows', 50)
print(datensatz.dtypes)
# Print df.head with an enlarged display width
set_option('display.width', 100)
print(datensatz.head(20))
# Print df.describe() with 4 decimal places
set_option('precision', 4)
print(datensatz.describe())
# Print the class distribution in the Label column
print(datensatz.groupby('Label').size())
# Plot histograms
pyplot.rcParams["figure.figsize"] = (15,12)
datensatz.hist()
pyplot.show()
from numpy import set_printoptions
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
# Hand the data over to an array
array = datensatz.values
X = array[:,0:10]
Y = array[:,10]
# Feature extraction
test = SelectKBest(score_func=chi2, k=5)
fit = test.fit(X, Y)
# Summary of the results
set_printoptions(precision=3)
print(fit.scores_)
features = fit.transform(X)
# Selected features
print(features[0:9,:])
# Recursive feature elimination
# Load the RFE module
from sklearn.feature_selection import RFE
# Use logistic regression as the algorithm
from sklearn.linear_model import LogisticRegression
# Hand the values in datensatz over to array2
array2 = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array2[:,0:10]
Y = array2[:,10]
# Feature extraction
model = LogisticRegression()
rfe = RFE(model, 3)
fit = rfe.fit(X, Y)
print("Num Features: %d" % fit.n_features_)
print("Selected Features: %s" % fit.support_)
print("Feature Ranking: %s" % fit.ranking_)
# Principal component analysis (PCA) of the data
# Load the PCA module from sklearn.decomposition
from sklearn.decomposition import PCA
# Hand the values in datensatz over to array3
array3 = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array3[:,0:10]
Y = array3[:,10]
# feature extraction
pca = PCA(n_components=3)
fit = pca.fit(X)
# summarize components
print("Explained Variance: %s" % fit.explained_variance_ratio_)
print(fit.components_)
# Estimating feature importance
# Load the ExtraTreesClassifier module from sklearn.ensemble
from sklearn.ensemble import ExtraTreesClassifier
# Hand the values in datensatz over to array4
array4 = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array4[:,0:10]
Y = array4[:,10]
# feature extraction
model = ExtraTreesClassifier()
model.fit(X, Y)
print(model.feature_importances_)
# Feature Importance with Extra Trees Classifier
from sklearn.ensemble import ExtraTreesClassifier
# Assign the file name to the variable dateiname
dateiname = 'pima-indians-diabetes.data.csv'
# Define the column names for the DataFrame
namen = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
# Read the data into a pandas DataFrame with read_csv()
df = read_csv(dateiname, names=namen)
# Hand the values in df over to array5
array5 = df.values
# Split the array into the dependent variable Y and the independent variables X - here the class is in column 9
X = array5[:,0:8]
Y = array5[:,8]
# feature extraction
model = ExtraTreesClassifier()
model.fit(X, Y)
print(model.feature_importances_)
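# A small readability follow-up (a hedged sketch, not part of the original notebook):
# pair each feature importance with its column name from the namen list above.
for name, importance in zip(namen[:-1], model.feature_importances_):
    print(name, round(importance, 3))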
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem description
Step2: Descriptive statistics
Step3: Visualizing the data
Step4: Univariate feature selection
Step5: In the sensor dataset, the sensors Sens-5, Sens-7, Sens-8, Sens-9 and Sens-10 are therefore particularly relevant.
Step6: Recursive feature elimination selects the same sensors as the univariate selection.
Step7: Estimating feature importance
Step8: Estimating feature importance selects the same sensors as the univariate selection.
|
1,319
|
<ASSISTANT_TASK:>
Python Code:
h, lam = sy.symbols('h, lambda', real=True, positive=True)
s, z = sy.symbols('s, z', real=False)
G = 1/(s-lam)
Y = G/s
Yp = sy.apart(Y, s)
Yp
from sympy.integrals.transforms import inverse_laplace_transform
from sympy.abc import t
inverse_laplace_transform(Yp, s, t)
lam = -0.5
h = 0.1
G = cm.tf([1], [1, -lam])
Gd = cm.c2d(G, h)
Hd = 1/lam * cm.tf([np.exp(lam*h)-1],[1, -np.exp(lam*h)])  # pole at exp(lam*h), as in the partial-fraction result above
print(Gd)
print(Hd)
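# A quick cross-check (a hedged sketch: it assumes cm is control.matlab, consistent
# with the cm.tf and cm.c2d calls above). Both discretizations should place the
# pole at exp(lam*h), i.e. about 0.9512 for lam = -0.5 and h = 0.1.
print(cm.pole(Gd))
print(cm.pole(Hd))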
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sampling and taking the z-transform of the step-response
|
1,320
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.random.random(10)
y = np.random.random(10)
plt.plot(x, y)
fig = plt.figure()
fig.canvas.get_supported_filetypes()
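# Since we just listed the supported file types, here is a hedged sketch of saving a
# figure to disk (the file name my_plot.png and the dpi value are arbitrary choices):
x = np.random.random(10)
y = np.random.random(10)
plt.plot(x, y)
plt.savefig('my_plot.png', dpi=150, bbox_inches='tight')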
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.array([4, 5, 6])
y = np.array([9, 4, 7])
plt.plot(x, y)
x1 = np.array([4, 5, 6])
y1 = np.array([9, 4, 7])
plt.plot(x1, y1)
x2 = np.array([1, 2, 4])
y2 = np.array([4, 6, 9])
plt.plot(x2, y2)
x = np.array([4, 5, 6])
y = np.array([9, 4, 7])
plt.scatter(x, y)
# This is a great function that gives me 50 evenly-spaced values from 0 to 10.
x = np.linspace(0, 10, 50)
dy = 0.8 # The error rate.
y = np.sin(x) + dy * np.random.random(50) # Adds a little bit of noise.
plt.errorbar(x, y, yerr = dy)
x = np.random.normal(size = 100)
plt.hist(x, bins = 20)
import scipy.misc
img = scipy.misc.face()
plt.imshow(img)
matrix = np.random.random((100, 100))
plt.imshow(matrix, cmap = "gray")
x = np.linspace(0, 10, 50) # 50 evenly-spaced numbers from 0 to 10
y = np.sin(x) # Compute the sine of each of these numbers.
plt.plot(x, y)
plt.xlabel("x") # This goes on the x-axis.
plt.ylabel("sin(x)") # This goes on the y-axis.
plt.title("Plot of sin(x)") # This goes at the top, as the plot title.
x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10
y1 = np.sin(x) # Compute the sine of each of these numbers.
y2 = np.cos(x) # Compute the cosine of each number.
plt.plot(x, y1, label = "sin(x)")
plt.plot(x, y2, label = "cos(x)")
plt.legend(loc = 0)
x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10
y = np.sin(x) # Compute the sine of each of these numbers.
plt.plot(x, y)
plt.xlim([-1, 11]) # Range from -1 to 11 on the x-axis.
plt.ylim([-3, 3]) # Range from -3 to 3 on the y-axis.
x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10
y = np.sin(x) # Compute the sine of each of these numbers.
plt.plot(x, y, c = "cyan")
X1 = np.random.normal(loc = [-1, -1], size = (10, 2))
X2 = np.random.normal(loc = [1, 1], size = (10, 2))
plt.scatter(X1[:, 0], X1[:, 1], c = "black", marker = "v")
plt.scatter(X2[:, 0], X2[:, 1], c = "yellow", marker = "o")
matrix = np.random.normal(size = (100, 100))
plt.imshow(matrix, cmap = "gray")
plt.colorbar()
import seaborn as sns # THIS IS THE KEY TO EVERYTHING
x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10
y = np.sin(x) # Compute the sine of each of these numbers.
plt.plot(x, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: By far, we'll use the plt object from the second import the most; that contains the main plotting library.
Step2: Note that you do NOT need to use plt.show()! When in "inline" mode, matplotlib will automatically render whatever the "active" figure is as soon as you issue some kind of plotting command.
Step3: Part 2
Step4: Matplotlib sees we've created points at (4, 9), (5, 4), and (6, 7), and it connects each of these in turn with a line, producing the above plot. It also automatically scales the x and y axes of the plot so all the data fit visibly inside.
Step5: They'll even be plotted in different colors. How nice!
Step6: We use the plt.scatter() function, which operates pretty much the same way as plt.plot(), except it puts dots in for each data point without drawing lines between them.
Step7: You use the yerr argument of the function plt.errorbar() in order to specify what your error rate in the y-direction is. There's also an xerr optional argument, if your error is actually in the x-direction.
Step8: plt.hist() has only 1 required argument
Step9: The plt.imshow() method takes as input a matrix and renders it as an image. If the matrix is 3D, it considers this to be an image in RGB format (width, height, and 3 color dimensions) and uses that information to determine colors. If the matrix is only 2D, it will consider it to be grayscale.
Step10: We built a random matrix (named matrix), and as you can see it looks exactly like that
Step11: Legends
Step12: First, you'll notice that the plt.plot() call changed a little with the inclusion of an optional argument
Step13: This can potentially help center your visualizations, too.
Step14: If you're making scatter plots, it can be especially useful to specify the type of marker in addition to the color you want to use. This can really help differentiate multiple scatter plots that are combined on one figure.
Step15: Finally, when you're rendering images, and especially matrices, it can help to have a colorbar that shows the scale of colors you have in your image plot.
Step16: The matrix is clearly still random, but the colorbar tells us the values in the picture range from around -3.5 or so to +4, giving us an idea of what's in our data.
|
1,321
|
<ASSISTANT_TASK:>
Python Code:
# @markdown Only execute if not already installed and running a cloud runtime
!pip install -q timesketch_api_client
!pip install -q vt-py nest_asyncio pandas
!pip install -q picatrix
# @title Import libraries
# @markdown This cell will import all the libraries needed for the running of this colab.
import re
import requests
import pandas as pd
from timesketch_api_client import config
from picatrix import notebook_init
import vt
import nest_asyncio # https://github.com/VirusTotal/vt-py/issues/21
nest_asyncio.apply()
notebook_init.init()
# @title VirusTotal Configuration
# @markdown In order to be able to lookup domains/IPs/samples using VirtusTotal we need to get an API key.
# @markdown
# @markdown If you don't have an API key you must sign up to [VirusTotal Community](https://www.virustotal.com/gui/join-us).
# @markdown Once you have a valid VirusTotal Community account you will find your personal API key in your personal settings section.
VT_API_KEY = '' # @param {type: "string"}
# @markdown If you don't have the API key you will not be able to use the Virustotal API
# @markdown to lookup information.
# @title Declare functions
# @markdown This cell will define few functions that we will use throughout
# @markdown this colab. This would be better to define outside of the notebook
# @markdown in a library that would be imported, but we keep it here for now.
def print_dict(my_dict, space_before=0):
    """Print the content of a dictionary."""
    max_len = max([len(x) for x in my_dict.keys()])
    spaces = ' '*space_before
    format_str = f'{spaces}{{key:{max_len}s}} = {{value}}'
for key, value in my_dict.items():
if isinstance(value, dict):
print(format_str.format(key=key, value=''))
print_dict(value, space_before=space_before + 8)
elif isinstance(value, list):
value_str = ', '.join(value)
print(format_str.format(key=key, value=value_str))
else:
print(format_str.format(key=key, value=value))
def ip_info(address):
    """Print out information about an IP address using the VT API."""
url = 'https://www.virustotal.com/vtapi/v2/ip-address/report'
params = {
'apikey': VT_API_KEY,
'ip': address}
response = requests.get(url, params=params)
j_obj = response.json()
def _print_stuff(part):
print('')
header = part.replace('_', ' ').capitalize()
print(f'{header}:')
for item in j_obj.get(part, []):
print_dict(item, 2)
_print_stuff('resolutions')
_print_stuff('detected_urls')
_print_stuff('detected_referrer_samples')
_print_stuff('detected_communicating_samples')
_print_stuff('detected_downloaded_samples')
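# A tiny usage sketch for print_dict (hedged: the dictionary content below is made
# up purely for illustration).
print_dict({'name': 'example', 'nested': {'key': 'value'}, 'tags': ['alpha', 'beta']})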
# @markdown Get a copy of the Timesketch client object.
# @markdown Parameters to configure the client:
# @markdown + host_uri: https://demo.timesketch.org
# @markdown + username: demo
# @markdown + auth_mode: timesketch (username/password)
# @markdown + password: demo
ts_client = config.get_client(confirm_choices=True)
for sketch in ts_client.list_sketches():
if not sketch.name.startswith('Szechuan'):
continue
print('We found the sketch to use')
print(f'[{sketch.id}] {sketch.name} - {sketch.description}')
break
%timesketch_set_active_sketch 6
timesketch_list_saved_searches_func?
for status in sketch.get_analyzer_status():
print(f'Analyzer: {status["analyzer"]} - status: {status["status"]}')
print(f'Results: {status["results"]}')
print('')
search_query = timesketch_query_func(
'parser:"winreg/windows_version"',
    fields='datetime,key_path,data_type,message,timestamp_desc,parser,display_name,product_name,hostname'
)
cur_df = search_query.table
cur_df[['hostname', 'product_name']]
cur_df[cur_df.hostname == 'CITADEL-DC01'].product_name.value_counts()
cur_df[cur_df.hostname == 'DESKTOP-SDN1RPT'].product_name.value_counts()
cur_df = timesketch_query_func(
'HKEY_LOCAL_MACHINE*System*Select AND hostname:"CITADEL-DC01"',
fields=(
'datetime,key_path,data_type,message,timestamp_desc,parser,display_name,'
        'product_name,hostname,values')
).table
for key, value in cur_df[['key_path', 'values']].values:
print(f'Key: {key}')
print(f'Value: {value}')
cur_df['current_value'] = cur_df['values'].str.extract(r'Current: \[[A-Z_]+\] (\d) ')
cur_df[['key_path', 'current_value']]
cur_df = timesketch_query_func(
'TimeZoneInformation AND hostname:"CITADEL-DC01"',
    fields='datetime,key_path,data_type,message,timestamp_desc,parser,display_name,product_name,hostname,configuration'
).table
cur_df
pd.set_option('max_colwidth', 400)
cur_df[cur_df.key_path.str.contains('ControlSet001')][['configuration']]
lines = []
for value in cur_df[cur_df.key_path.str.contains('ControlSet001')]['configuration'].values:
items = value.split(':')
line_dict = {}
key = items[0]
for item in items[1:-1]:
*values, new_key = item.split()
line_dict[key] = ' '.join(values)
key = new_key
line_dict[key] = items[-1]
lines.append(line_dict)
time_df = pd.DataFrame(lines)
time_df
%timesketch_available_aggregators
params = {
'field': 'Source',
'limit': 10,
'supported_charts': 'hbarchart',
'chart_title': 'Top 10 Source IP',
}
aggregation = timesketch_run_aggregator_func(
'field_bucket', parameters=params
)
aggregation.chart
# Remove the comment and run this code if you are running in colab
# but have a local Jupyter kernel running:
# alt.renderers.enable('colab')
# Remove this comment if you are running in Jupyter and the chart is not displayed
# alt.renderers.enable('notebook')
aggregation.table
params = {
'field': 'Destination',
'limit': 10,
'supported_charts': 'hbarchart',
    'chart_title': 'Top 10 Destination IP',
}
aggregation = timesketch_run_aggregator_func('field_bucket', parameters=params)
aggregation.chart
attacker_dst = timesketch_query_func(
'Source:"194.61.24.102" AND data_type:"pcap:wireshark:entry"',
fields='datetime,message,timestamp_desc,Destination,DST port,Source,Protocol,src port').table
attacker_dst.head(10)
search_obj = timesketch_query_func(
'Source:"194.61.24.102" AND data_type:"pcap:wireshark:entry"',
fields='datetime,message,timestamp_desc,Destination,DST port,Source,Protocol,src port')
search_obj.max_entries = 150000
attacker_dst = search_obj.table
attacker_dst.head(10)
attacker_dst.shape
attacker_group = attacker_dst[['DST port','Destination', 'Protocol']].groupby(
['DST port','Destination'], as_index=False)
attacker_dst_mytable = attacker_group.count()
attacker_dst_mytable.rename(columns={'Protocol': 'Count'}, inplace=True)
attacker_dst_mytable.sort_values(by=['Count'], ascending=False)
attacker_dst = timesketch_query_func(
'194.61.24.102 AND data_type:"scapy:pcap:entry"',
fields='datetime,message,timestamp_desc,ip_flags,ip_dst,ip_src,payload,tcp_flags,tcp_seq,tcp_sport,tcp_dport,tcp_window').table
attacker_dst.head(10)
params = {
'field': 'ip_src',
'query_string': 'ip_flags:"evil"',
'supported_charts': 'hbarchart',
'chart_title': 'Source IPs with "evil" bit set',
}
aggregation = timesketch_run_aggregator_func('query_bucket', parameters=params)
aggregation.table
name = 'Source IPs with "evil" bit set'
aggregation.name = name
aggregation.title = name
aggregation.save()
attacker_dst.iloc[0].message
attacker_dst.message.str.slice(start=0, stop=30).unique()
attacker_packages = attacker_dst.message.str.slice(start=30).str.split('|', expand=True)
attacker_packages.head(3)
attacker_packages = attacker_packages[[0, 1, 2]]
attacker_packages.columns = ['ether', 'ip', 'transport']
attacker_packages.head(3)
attacker_packages[['transport']].head(10)
def parse_row(row):
items = row.split()
protocol = items[0][1:]
line_dict = {
'protocol': protocol
}
for item in items[1:]:
key, _, value = item.partition('=')
if key == 'options':
# We don't want options nor anything after that.
break
line_dict[key] = value
return line_dict
proto_df = pd.DataFrame(list(attacker_packages['transport'].apply(parse_row).values))
proto_df['datetime'] = attacker_dst['datetime']
proto_df.head(3)
proto_df[['datetime', 'protocol', 'type', 'dport']].head(10)
attacker_dst = timesketch_query_func(
'(194.61.24.102 AND 10.42.85.10) AND data_type:"scapy:pcap:entry"',
fields='datetime,message,timestamp_desc,ip_flags,ip_dst,ip_src,payload,tcp_flags,tcp_seq,tcp_sport,tcp_dport,tcp_window', max_entries=500000).table
attacker_dst.head(10)
attacker_packages = attacker_dst.message.str.slice(start=30).str.split('|', expand=True)
attacker_packages = attacker_packages[[0, 1, 2]]
attacker_packages.columns = ['ether', 'ip', 'transport']
proto_df = pd.DataFrame(list(attacker_packages['transport'].apply(parse_row).values))
proto_df['datetime'] = attacker_dst['datetime']
proto_df[['datetime', 'protocol', 'type', 'dport']].head(10)
evtx_df = timesketch_query_func(
'194.61.24.102 AND data_type:"windows:evtx:record"', fields='*').table
evtx_df.head(3)
evtx_df.username.value_counts()
evtx_df.event_identifier.value_counts()
evtx_df.source_name.value_counts()
evtx_df[evtx_df.username == 'Administrator'][['datetime', 'event_identifier', 'tag', 'logon_type', 'source_address']]
timesketch_query_func(
'source_address:"194.61.24.102" AND data_type:"windows:evtx:record"',
fields='logon_type,username').table[['logon_type', 'username']].drop_duplicates()
timeframe_df = timesketch_query_func(
'*', start_date='2020-09-19T01:00:00', end_date='2020-09-19T04:20:00', max_entries=50000
).table
max_entries = 1500000
timeframe_df = timesketch_query_func(
'*', start_date='2020-09-19T01:00:00', end_date='2020-09-19T04:20:00', max_entries=max_entries, fields='*'
).table
timeframe_df.shape
timeframe_df.data_type.value_counts()
group = timeframe_df[
timeframe_df.data_type == 'windows:evtx:record'][['event_identifier', 'timestamp', 'source_name']].groupby(
by=['event_identifier', 'source_name'], as_index=False
)
group.count().rename(columns={'timestamp': 'count'}).sort_values('count', ascending=False)
timeframe_evtx = timeframe_df[timeframe_df.data_type == 'windows:evtx:record'].copy()
timeframe_evtx['event_identifier'] = timeframe_evtx.event_identifier.fillna(value=0)
timeframe_evtx[timeframe_evtx.event_identifier == 36888].strings.str.join('|').unique()
timeframe_evtx = timeframe_df[timeframe_df.data_type == 'windows:evtx:record'].copy()
timeframe_evtx['event_identifier'] = timeframe_evtx.event_identifier.fillna(value=0)
timeframe_evtx[(timeframe_evtx.event_identifier == 131) & (timeframe_evtx.source_name == 'Microsoft-Windows-RemoteDesktopServices-RdpCoreTS')].strings.str.join('|').unique()
timeframe_df.data_type.value_counts()
timeframe_df[timeframe_df.data_type == 'windows:prefetch:execution'][['datetime', 'executable', 'run_count']]
timeframe_df[timeframe_df.data_type == 'windows:prefetch:execution'].executable.value_counts()
timeframe_df[(timeframe_df.data_type == 'windows:prefetch:execution') & (~timeframe_df.run_count.isna()) & (timeframe_df.run_count < 2)][['executable', 'run_count']].drop_duplicates()
timeframe_evtx[(timeframe_evtx.event_identifier == 7045) & (timeframe_evtx.source_name == 'Service Control Manager')]['strings']
attacker_dst_http = timesketch_query_func(
'(194.61.24.102 AND 10.42.85.10) AND data_type:"scapy:pcap:entry" AND *http* AND *GET*',
fields='datetime,message,timestamp_desc,ip_flags,ip_dst,ip_src,payload,tcp_flags,tcp_seq,tcp_sport,tcp_dport,tcp_window').table
attacker_dst_http.head(4)
attacker_dst.shape
attacker_dst_http[attacker_dst_http.message.str.contains(r'GET|POST')].message.str.extract(r'<Raw load=([^|]+)')
for v in attacker_dst_http[attacker_dst_http.message.str.contains(r'GET|POST')].message.str.extract(r'<Raw load=([^|]+)').values:
value = str(v)
print(value.replace('\\\\r\\\\n', '\n'))
coreupdater = timesketch_query_func(
'coreupdater.exe AND data_type:"fs:stat"',
fields='file_size,filename,hostname,message,data_type,datetime,sha256_hash').table
coreupdater.sort_values(by=['datetime'], ascending=True, inplace=True)
coreupdater.head(10)
coreupdater[['hostname', 'filename', 'sha256_hash']].drop_duplicates()
if VT_API_KEY:
vt_client = vt.Client(VT_API_KEY)
else:
vt_client = None
hash_value = list(coreupdater[coreupdater.filename == r'\Windows\System32\coreupdater.exe'].sha256_hash.unique())[0]
if vt_client:
file_info = vt_client.get_object(f'/files/{hash_value}')
print_dict(file_info.last_analysis_stats)
if file_info:
stars = '*'*10
print(f'{stars}Summary{stars}')
print('')
print_dict(file_info.sigma_analysis_summary)
print('')
print(f'{stars}Analysis Stats{stars}')
print('')
print_dict(file_info.sigma_analysis_stats)
else:
print('No VT API key defined, you\'ll need to manually loookup the information')
if file_info:
print_dict(file_info.exiftool)
if file_info:
print_dict(file_info.last_analysis_results)
coreupdater = timesketch_query_func(
'coreupdater.exe AND NOT data_type:"fs:stat"', fields='*').table
coreupdater.sort_values(by=['datetime'], ascending=True, inplace=True)
coreupdater.head(10)
coreupdater.loc[coreupdater['data_type'] == 'autoruns:record']
coreupdater.loc[coreupdater['data_type'] == 'windows:registry:service'][['datetime', 'hostname', 'key_path', 'image_path', 'name', 'start_type', 'service_type', 'values']]
coreupdater.loc[coreupdater['data_type'] == 'pcap:wireshark:entry'][['datetime', 'message']]
%%timesketch_query search_obj --fields datetime,ip_src,message,timestamp_desc,tcp_dport,ip_dst --max_entries 150000
ip_src:10.42.85* AND NOT ip_dst:10.42.85* AND data_type:"scapy:pcap:entry"
pd.options.display.max_colwidth = 200
data = search_obj.table
data[['datetime','ip_src', 'ip_dst', 'tcp_dport']]
mytable = data.groupby(['ip_src','ip_dst']).size().to_frame('count').reset_index()
mytable.sort_values(by=['count'], ascending=False, inplace=True)
mytable
ip_set = set()
list(map(ip_set.add, data['ip_src'].unique()))
list(map(ip_set.add, data['ip_dst'].unique()))
print(f'Found: {len(ip_set)} IPs (including internal)')
ip_info('194.61.24.102')
lines = []
url = 'https://www.virustotal.com/vtapi/v2/ip-address/report'
for ip_address in ip_set:
if ip_address.startswith('10.'):
continue
params = {
'apikey': VT_API_KEY,
'ip': ip_address,
}
response = requests.get(url, params=params)
j_obj = response.json()
line_dict = {
'resolutions': [x.get('hostname') for x in j_obj.get('resolutions', [])],
'ip': ip_address,
'detected_urls': j_obj.get('detected_urls'),
'detected_downloaded_samples': [
x.get('sha256') for x in j_obj.get('detected_downloaded_samples', [])] or None,
'detected_referrer_samples': [
x.get('sha256') for x in j_obj.get('detected_referrer_samples', [])] or None,
'detected_communicating_samples': [
x.get('sha256') for x in j_obj.get('detected_communicating_samples', [])] or None,
}
lines.append(line_dict)
ip_df = pd.DataFrame(lines)
ip_df.head(3)
hash_value
ip_df[~ip_df.detected_referrer_samples.isna() & ip_df.detected_referrer_samples.str.contains(hash_value)]
ip_df[~ip_df.detected_communicating_samples.isna() & ip_df.detected_communicating_samples.str.contains(hash_value)]
ip_df[~ip_df.detected_downloaded_samples.isna() & ip_df.detected_downloaded_samples.str.contains(hash_value)]
ip_df[~ip_df.detected_downloaded_samples.isna()]
ip_df[~ip_df.detected_downloaded_samples.isna()]['ip'].value_counts()
timesketch_query_func(
'"This program cannot be run in DOS" AND data_type:"scapy:pcap:entry"',
fields='*'
).table[['ip_dst', 'ip_src']].drop_duplicates()
search_obj = %timesketch_query --fields * username:"Administrator"
other_df = search_obj.table
other_df.hostname.value_counts()
other_df[other_df['datetime'].dt.strftime('%Y%m%d') == '20200919'][['datetime', 'hostname', 'logon_type']]
other_df.data_type.value_counts()
other_df[other_df.data_type == 'windows:registry:mstsc:connection'][['datetime', 'display_name', 'hostname', 'message']]
day_filter = other_df.datetime.dt.strftime('%Y%m%d') == '20200919'
filtered_view = other_df[day_filter & other_df.tag.str.join(',').str.contains('logon-event')][[
'datetime', 'computer_name', 'logon_process', 'logon_type', 'source_address', 'source_username', 'windows_domain', 'username', 'workstation']]
filtered_view
filtered_view.workstation.value_counts()
filtered_view[filtered_view.workstation == 'kali']
filtered_view[filtered_view.logon_type == 'RemoteInteractive']
secret_files = timesketch_query_func('secret', fields='*').table
secret_files.data_type.value_counts()
secret_files.parser.value_counts()
secret_files[secret_files.data_type == 'msie:webcache:container'][['datetime', 'timestamp_desc', 'url']]
secret_files[secret_files.message.str.contains(r'FileShare\/Secret')]['message'].unique()
secret_files[secret_files.message.str.contains(r'FileShare\/Secret')][['timestamp_desc', 'message']].drop_duplicates()
secret_files[secret_files.message.str.contains(r'FileShare\/Secret') & secret_files['timestamp_desc'] != 'Expiration Time'][['datetime', 'message']].drop_duplicates()
secret_files[secret_files.message.str.contains(r'FileShare\/Secret')].message.str.extract(r'(FileShare\/Secret\/[^ ]+)')[0].unique()
beth_df = secret_files[secret_files.message.str.contains('beth', case=False)]
beth_df.data_type.value_counts()
beth_df[beth_df.data_type == 'fs:stat'][['datetime', 'display_name', 'timestamp_desc']]
beth_df[beth_df.data_type == 'olecf:dest_list:entry'][['datetime', 'hostname', 'timestamp_desc', 'path']]
beth_df[beth_df.data_type.str.contains('windows:shell_item:file_entry|msie:webcache:container')][[
'datetime', 'data_type', 'source_long', 'timestamp_desc', 'url', 'long_name', 'origin', 'shell_item_path']]
search_obj = %timesketch_query --fields * username:"Administrator" AND tag:"logon-event"
admin_login = search_obj.table
admin_login.shape
admin_login['datetime'] = pd.to_datetime(admin_login['datetime'])
admin_login[admin_login.datetime.dt.strftime('%Y%m%d') == '20200918'][[
'datetime', 'logon_type', 'source_address', 'source_username', 'username', 'windows_domain', 'workstation']]
%%timesketch_query search_obj --fields *
*.zip OR *.tar.gz OR *.tgz OR *.tbz OR *.tar.bz2 OR *.cab OR *.7a
zip_df = search_obj.table
zip_df.columns
zip_df.data_type.value_counts()
zip_df[~zip_df.filename.isna() & (zip_df.filename.str.contains('.zip$'))]['filename'].value_counts()
zip_day = zip_df[zip_df['datetime'].dt.strftime('%Y%m%d') == '20200919']
zip_day['filename'].value_counts()
zip_day[zip_day.data_type == 'windows:lnk:link'][['link_target', 'local_path']].drop_duplicates()
zip_day[zip_day.data_type == 'fs:ntfs:usn_change'][['filename']].drop_duplicates()
timestomp_df = timesketch_query_func(
'data_type:"fs:stat" AND (*secret* OR *zip* OR *coreupdater* OR *Szechuan*)',
fields='datetime,timestamp,timestamp_desc,filename,hostname,inode').table
timestomp_df.shape
timestomp_df[timestomp_df.timestamp.astype('str').str.endswith('000000')]
secret_timestomp_df = timesketch_query_func(
'"File reference: 87111-" AND data_type:"fs:ntfs:usn_change"',
fields='data_type,datetime,filename,hostname,inode,message').table
secret_timestomp_df.sort_values('datetime')
more_timestomp_df = timesketch_query_func(
'(("File reference: 87111-" AND data_type:"fs:ntfs:usn_change") OR (inode:87111)) AND hostname:"CITADEL-DC01"',
fields='data_type,datetime,timestamp,timestamp_desc,filename,hostname,inode,message').table
more_timestomp_df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: <a href="https
Step3: Now that we've got a copy of the TS client we need to get to the sketch.
Step4: OK, sketch nr 6 is the one that we are after, let's set that as the active sketch. This is something that the Timesketch picatrix magics expect, that is to first set the active sketch that you will be using. After that all the magics don't need sketch definitions.
Step5: To learn more about picatrix and how it works, please use the magic %picatrixmagics and see what magics are available and then use %magic --help or magic_func? to see more information about that magic.
Step6: Pre-Thoughts
Step7: From there you can get a glance at what has analysis has been done on the dataset, and what the results were.. for instance that login was completed and it found several logon and logoff entries.
Step8: So we now have the all the data, we can read the data from the table or do one more filtering to get the answer
Step9: What’s the Operating System of the Desktop?
Step10: What was the local time of the Server?
Step11: Now let's look at what the value is set for the key.
Step12: We can parse this out a bit more if we want to, or just read from there that the current value is 1
Step13: The current one is set 1
Step14: Let's increase the column with for pandas, that will make it easier to read columns with longer text in them.
Step15: So we need to extract what is in TimeZoneKeyName, we can do this differently. For now we can just read the configuration field, and then split it into a dict and then construct a new DataFrame with these fields, that is taking a line that is key1
Step16: Let's look at the newly constructed data frame
Step17: Then we've got the time zone of the server, which is Pacific Standard Time
Step18: Now that we know what aggregators are available, let's start with aggregating the field Source, and get the top 10.
Step19: If you are viewing this as in Colab but connecting to a local runtime you may need to enable this in order to be able to view the charts
Step20: If you prefer to get the data frame instead of the chart you can call aggregation.table
Step21: Now let's look at the Destination field, same as before
Step22: We can clearly see that the 194.61.24.102 sticks out, so lets try to understand what this IP did. Also note that it is not common that a system from the internet tries to connect to a intranet IP.
Step23: OK, we can see that the API says we got 40k records returned but the search actually produced 128.328 records,so let's increase our max entries...
Step24: We got a fairly large table, let's look at the size
Step25: We will now need to do some aggregation on the data that we got, let's use pandas for that. For that there is a function called groupby where we can run aggregations.
Step26: Now we got a group, and to get a count, we can use the count() function of the group.
Step27: So we can already point out that there is a lot of traffic from this ip to 10.42.85.10 on port 3389which is used for Remote Desktop Protocol (RDP)
Step28: Let's look at a few entries here
Step29: What we can see here is that quite a bit of the information is in the message field that we need to decode.
Step30: We could even save this (if you have write access to the sketch, which the demo user does not have)
Step31: And now we could use this in a story for instance.
Step32: Now that we know that, let's first remove the <bound method... in the beginning. Let's check to see if it's the same across the board
Step33: OK, so it's the same, we can therefore just use the slice method to remove this part of the string. After that we can then split the string based on | which separates the protocols.
Step34: Let's explain what was done in the above syntax. First of all we used the slice method to cut the first 30 characters out of the messages field. What we are left with is the rest of the message string. We then use the split method to split the string, based on |, and adding the option of expand=True, which then expands the results into a separate dataframe (as an opposed to just a list).
Step35: We can see a lot of values there are marked as None.. and basically all the columns from 3 and up are not useful, so let's remove those. And then rename the remaining columns
Step36: And let's look at how this looks like now
Step37: Now let's look at what happened in the first few packages
Step38: What we can see here is that there is first an ICMP (Ping) then two HTTP/HTTPS Requests , another ICMP and then the 3389 traffic begins.
Step39: Let's look at it, but first let's add in the datetime, since these are the same records as we had in the original DF we can simply apply the datatime there.
Step40: So now if we look at the first few actions made
Step41: So you can see the first action here.
Step42: We can then do the same as we did before to break things down.
Step43: So we know that this seems to be a RDP connection from the IP 194.61.24.102. Let's look at login events
Step44: Let's get a quick overview of the data
Step45: Let's look at the Administrator logins here
Step46: So we can see here that the user Administrator was logged in remotely on 2020-09-19 quite a few times, all between 3 and 4 am UTC.
Step47: Does not look like it. Only administrator. But now we've got a timeframe to search for.
Step48: OK, we can see that in this timeframe we have 925k records, but we only got back 50k, so let's re-run this and increase the size
Step49: And now look at the size
Step50: And to look at what we've got here
Step51: Let's start by looking at what type of EVTX records we are seeing
Step52: The two most common alerts here are Schannel/36888 (A Fatal Alert Was Generated) and Schannel/36874 (An SSL Connection Request Was Received From a Remote Client Application, But None of the Cipher Suites Supported by the Client Application Are Supported by the Server).
Step53: Let's ignore those for a while, but we do see a lot of RDP connections
Step54: It all seems to be our infamous IP address. That is brute forcing RDP.
Step55: We see some windows
Step56: Let's look at this another way
Step57: Or by using run count as an indicator of something that is rare
Step58: Here we see few applications that we may want to take a look at...
Step59: OK, here we do see some services being created, onoe of which is to create an auto start service called coreupdater. This is one of the files that we saw only executed once. Let's take a closer look at coreupdater.
Step60: Let's look at HTTP traffic
Step61: Lets look at this again, this time splitting on the new lines, etc..
Step62: So we are doing a HTTP request to
Step63: So we have all file entries with the filename coreupdater.exe. What is interesting here is that it is on both systems.
Step64: We can now use the hash to lookup on Virustotal
Step65: Let's extract the hash value
Step66: And now we can look up the data.
Step67: Let's look at some of the summary
Step68: This clearly does not look good. Let's look at some information here
Step69: This does not look very good. Lets look for other events where coreupdater.exe is present.
Step70: Ok there is a lot to see here, lets start with the Top event, the autoruns are mathing the md5 sum we discovered earlier.
Step71: Then we have the service install events (note that it happened on both systems)
Step72: And we have the GET events again
Step73: What malicious IP Addresses were involved?
Step74: Let's print out an aggregation of these IP pairs
Step75: The vast majority is the 194. IP, let's get a list of all external IPs here
Step76: Let's look up the IP from earlier
Step77: We can now look up all of the IPs, and put it into a data frame
Step78: Now we have a data frame, let's look at it briefly
Step79: We had our hash value from last time
Step80: Let's see if that shows up
Step81: That does not seem to show up in our data frame, but let's see if there are other potentially malicious IPs in the data set.
Step82: There are several here.. let's get a summary
Step83: Let's look for some strings in nthe data...
Step84: There is another IP address here that we should look for... 203.78.103.109
Step85: Let's look at what systems are involved
Step86: Let's look at this a bit more
Step87: Let's look at the different data types we've got
Step88: Let's look at the content of the registry entry for MSTC connection
Step89: We can see that it seems that the attacker did enter the desktop (DESKTOP-SDN1RPT) using RDP from the Domain Controller (that we already established that they brute-forced to gain access)
Step90: We can look at fewer records here, or some summaries
Step91: So we go from kali to both the DC server and the Desktop.
Step92: Likely that the attacker used Kali as their attack platform or operating system of choice for the attacker. Let's look at the remote interactive logons
Step93: Here is is clear that the attacker did RDP into CITADEL-DC01 from the IP of 194.61.24.102 and from there they entered DESKTOP-SDN1RPT (also via RDP using the same credentials).
Step94: Let's take a look at what we've got
Step95: and
Step96: Let's look at some of these browser entries...
Step97: Uh that looks interesting
Step98: Let's look at some of the details here
Step99: Let's remove the expiration time from this...
Step100: Let's extract unique filenames here
Step101: There are more files here than just the stolen szechuan sauce....
Step102: Start with the simple fs stat
Step103: Let's look at another source of timestamped information
Step104: The Administrator account accesses the file at "2020-09-18 20
Step105: Did the attacker steal or access any other sensitive files? If so, what times?
Step106: Let's limit ourselves to the day of the attack
Step107: OK, there are some interesting files here... most notably the Secret.zip the temporary Secret.zip, the My Social Security Number.zip and loot.zip
Step108: These should be looked at a bit more closely.
Step109: Let's see how many results we got
Step110: And finally, let's look for any timestamps the end with 0s (this could have false positivies, but it's literally a 1 in a million chance).
Step111: There we go! Beth_Secret.txt has timestamps that lack fractional seconds; that's strange. Let's take a closer look to see if anything else looks strange about this file.
Step112: Interesting! We do have change journal records for that file. Let's work through these results (sorted oldest to newest)
|
1,322
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('/home/pi/minecraft-programming')
import mini_game as pepgame
game = pepgame.ShapeGame("TETRAHEDRON")
game.startGame()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Level 0
|
1,323
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pylab as plt
import numpy as np
import sys
sys.path.append('../')
from pyphot import astropy as pyphot
from pyphot.svo import get_pyphot_astropy_filter as get_filter_from_svo
lst = ["2MASS/2MASS.J", "2MASS/2MASS.H", "2MASS/2MASS.Ks",
"HST/ACS_WFC.F475W", "HST/ACS_WFC.F814W"]
filters = [get_filter_from_svo(k) for k in lst]
filters
# get the internal default library of passbands filters
filters[0].info(show_zeropoints=True)
# convert to magnitudes
import numpy as np
# We'll use Vega spectrum as example
from pyphot.astropy import Vega
vega = Vega()
f = filters[-1]
# compute the integrated flux through the filter f
# note that it works on many spectra at once
fluxes = f.get_flux(vega.wavelength, vega.flux, axis=-1)
# Note that fluxes is now with units of erg/s/cm2/AA
# pyphot gives Vega in flam and can convert between flux density units.
fluxes, vega.wavelength, vega.flux
# convert to vega magnitudes
mags = -2.5 * np.log10(fluxes.value) - f.Vega_zero_mag
print("Vega magnitude of Vega in {0:s} is : {1:f} mag".format(f.name, mags))
mags = -2.5 * np.log10(fluxes.value) - f.AB_zero_mag
print("AB magnitude of Vega in {0:s} is : {1:f} mag".format(f.name, mags))
mags = -2.5 * np.log10(fluxes.value) - f.ST_zero_mag
print("ST magnitude of Vega in {0:s} is : {1:f} mag".format(f.name, mags))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quick Start
Step2: Suppose one has a calibrated spectrum and wants to compute the Vega magnitude through the HST WFC3 F110W passband,
|
1,324
|
<ASSISTANT_TASK:>
Python Code:
#$HIDE_INPUT$
import folium
from folium import Marker, GeoJson
from folium.plugins import HeatMap
import pandas as pd
import geopandas as gpd
releases = gpd.read_file("../input/geospatial-learn-course-data/toxic_release_pennsylvania/toxic_release_pennsylvania/toxic_release_pennsylvania.shp")
releases.head()
stations = gpd.read_file("../input/geospatial-learn-course-data/PhillyHealth_Air_Monitoring_Stations/PhillyHealth_Air_Monitoring_Stations/PhillyHealth_Air_Monitoring_Stations.shp")
stations.head()
print(stations.crs)
print(releases.crs)
# Select one release incident in particular
recent_release = releases.iloc[360]
# Measure distance from release to each station
distances = stations.geometry.distance(recent_release.geometry)
distances
print('Mean distance to monitoring stations: {} feet'.format(distances.mean()))
print('Closest monitoring station ({} feet):'.format(distances.min()))
print(stations.iloc[distances.idxmin()][["ADDRESS", "LATITUDE", "LONGITUDE"]])
two_mile_buffer = stations.geometry.buffer(2*5280)
two_mile_buffer.head()
# Create map with release incidents and monitoring stations
m = folium.Map(location=[39.9526,-75.1652], zoom_start=11)
HeatMap(data=releases[['LATITUDE', 'LONGITUDE']], radius=15).add_to(m)
for idx, row in stations.iterrows():
Marker([row['LATITUDE'], row['LONGITUDE']]).add_to(m)
# Plot each polygon on the map
GeoJson(two_mile_buffer.to_crs(epsg=4326)).add_to(m)
# Show the map
m
# Turn group of polygons into single multipolygon
my_union = two_mile_buffer.geometry.unary_union
print('Type:', type(my_union))
# Show the MultiPolygon object
my_union
# The closest station is less than two miles away
my_union.contains(releases.iloc[360].geometry)
# The closest station is more than two miles away
my_union.contains(releases.iloc[358].geometry)
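# A hedged follow-up: use the dissolved MultiPolygon to count, for all release
# incidents at once, how many occurred within two miles of some monitoring station.
within_two_miles = releases.geometry.within(my_union)
print('{} of {} releases within two miles of a station'.format(
    within_two_miles.sum(), len(releases)))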
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You'll work with a dataset from the US Environmental Protection Agency (EPA) that tracks releases of toxic chemicals in Philadelphia, Pennsylvania, USA.
Step2: You'll also work with a dataset that contains readings from air quality monitoring stations in the same city.
Step3: Measuring distance
Step4: We also check the CRS to see which units it uses (meters, feet, or something else). In this case, EPSG 2272 has units of feet. (If you like, you can check this here.)
Step5: Using the calculated distances, we can obtain statistics like the mean distance to each station.
Step6: Or, we can get the closest monitoring station.
Step7: Creating a buffer
Step8: We use folium.GeoJson() to plot each polygon on a map. Note that since folium requires coordinates in latitude and longitude, we have to convert the CRS to EPSG 4326 before plotting.
Step9: Now, to test if a toxic release occurred within 2 miles of any monitoring station, we could run 12 different tests for each polygon (to check individually if it contains the point).
Step10: We use the contains() method to check if the multipolygon contains a point. We'll use the release incident from earlier in the tutorial, which we know is roughly 3781 feet to the closest monitoring station.
Step11: But not all releases occurred within two miles of an air monitoring station!
|
1,325
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 200
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
#DONE
try:
x = x.astype('float32')
return np.array(x/255)
    except (AttributeError, TypeError):
        # In real life, handle the exception more gracefully
        print('x is badly formatted, should be an array of numbers')
pass
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
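# Quick demonstration (hedged: the example values are arbitrary): pixel values
# 0, 127.5 and 255 should map to 0.0, 0.5 and 1.0.
print(normalize(np.array([0, 127.5, 255])))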
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
#DONE
nclasses = 10
    ohv = np.zeros((len(x), nclasses), dtype=np.int)
for i in range(len(x)):
ohv[i,x[i]] = 1
return ohv
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
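# Quick demonstration (hedged: the example labels are arbitrary): each label index
# becomes a 10-wide row containing a single 1.
print(one_hot_encode([0, 3, 9]))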
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
# DONE
return tf.placeholder(tf.float32,(None, *image_shape), name = 'x')
def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
# DONE
return tf.placeholder(tf.float32,[None, n_classes], name = 'y')
def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
# DONE
return tf.placeholder(tf.float32, name = 'keep_prob')
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
# DONE
input_shape = x_tensor.get_shape().as_list()
conv_strides_4D = [1, *conv_strides, 1]
pool_strides_4D = [1, *pool_strides, 1]
pool_ksize_4D = [1, *pool_ksize, 1]
weights = tf.Variable(tf.truncated_normal(shape=[*conv_ksize, int(input_shape[3]), int(conv_num_outputs)],mean=0, stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
conv_layer = tf.nn.conv2d(x_tensor, weights, conv_strides_4D, padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
conv_layer = tf.nn.max_pool(conv_layer, pool_ksize_4D, pool_strides_4D, padding='SAME')
return conv_layer
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
# DONE
shape = x_tensor.get_shape().as_list()
return tf.reshape(x_tensor,[-1,shape[1]*shape[2]*shape[3]])
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# DONE
num_inputs = x_tensor.get_shape().as_list()[1]
weights = tf.Variable(tf.truncated_normal(shape=[num_inputs,num_outputs],mean=0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
fc1 = tf.add(tf.matmul(x_tensor,weights), bias)
fc1 = tf.nn.relu(fc1)
return fc1
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# DONE
num_inputs = x_tensor.get_shape().as_list()[1]
weights = tf.Variable(tf.truncated_normal(shape=[num_inputs,num_outputs],mean=0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
out = tf.add(tf.matmul(x_tensor, weights), bias)
return out
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_output(output)
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds the dropout keep probability.
    : return: Tensor that represents logits
    """
# DONE
    # 1st convolutional/max pool layer
conv1_num_outputs = 64
conv1_ksize = (3,3)
conv1_strides = (1,1)
pool1_ksize = (2,2)
pool1_strides = (2,2)
conv1 = conv2d_maxpool(x, conv1_num_outputs, conv1_ksize, conv1_strides, pool1_ksize, pool1_strides)
# I added to each conv layer an extra step to perform local normalization (idea taken from section 3.3 in paper:
# Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012).)
#I also took some ideas from: https://arxiv.org/pdf/1608.06037.pdf
conv1 = tf.nn.lrn(conv1)
    # 2nd convolutional/max pool layer
conv2_num_outputs = 128
conv2_ksize = (2,2)
conv2_strides = (1,1)
pool2_ksize = (2,2)
pool2_strides = (2,2)
conv2 = conv2d_maxpool(conv1, conv2_num_outputs, conv2_ksize, conv2_strides, pool2_ksize, pool2_strides)
conv2 = tf.nn.lrn(conv2)
    # I decided to add this dropout here to reduce overfitting... even with the maxpool,
    # I observed an improvement in the results.
conv2 = tf.nn.dropout(conv2, keep_prob)
    # 3rd convolutional/max pool layer
conv3_num_outputs = 256
conv3_ksize = (2,2)
conv3_strides = (1,1)
pool3_ksize = (2,2)
pool3_strides = (2,2)
conv3 = conv2d_maxpool(conv2, conv3_num_outputs, conv3_ksize, conv3_strides, pool3_ksize, pool3_strides)
conv3 = tf.nn.lrn(conv3)
# DONE
x_flat = flatten(conv3)
# DONE
    # 1st fully connected layer with dropout
fc1_num_outputs = 1024
fc1 = fully_conn(x_flat, fc1_num_outputs)
fc1 = tf.nn.dropout(fc1, keep_prob)
    # 2nd fully connected layer with dropout
fc2_num_outputs = 1024
fc2 = fully_conn(fc1, fc2_num_outputs)
fc2 = tf.nn.dropout(fc2, keep_prob)
    # Output layer
n_classes = 10
logits = output(fc2, n_classes)
    return logits
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
# DONE
session.run(optimizer, feed_dict={
x : feature_batch,
y: label_batch,
keep_prob: keep_probability
})
pass
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
# Done
loss = session.run(cost, feed_dict={
x : feature_batch,
y: label_batch,
keep_prob: keep_probability
})
accuracy = session.run(accuracy, feed_dict={
x : valid_features,
y: valid_labels,
keep_prob: keep_probability
})
print('\nLoss: {:>.6f} --------- Accuracy: {:.6f}%\n'.format(loss, accuracy*100))
pass
# TODO: Tune Parameters
epochs = 70
batch_size = 256
keep_probability = 0.5
# Note: 60 epochs might be enough judging by the results,
# but it takes about 1 hour to re-run so I'll stay with this answer
# (I tested with 30, 40, 50, 70 and 100 epochs and endless combinations of filters and max-pooling)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
accuracy_array = []
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
batch_accuracy = 0.
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
    """
    Test the saved model against the test dataset
    """
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step38: Create Convolutional Model
Step41: Train the Neural Network
Step43: Show Stats
Step44: Hyperparameters
Step46: Train on a Single CIFAR-10 Batch
Step48: Fully Train the Model
Step51: Checkpoint
|
1,326
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='white')
from utils import decorate
from thinkstats2 import Pmf, Cdf
import thinkstats2
import thinkplot
def MakeNormalModel(values, label=''):
    """
    Plots a CDF with a Normal model.

    values: sequence
    """
cdf = thinkstats2.Cdf(values, label=label)
mean, var = thinkstats2.TrimmedMeanVar(values)
std = np.sqrt(var)
print('n, mean, std', len(values), mean, std)
xmin = mean - 4 * std
xmax = mean + 4 * std
xs, ps = thinkstats2.RenderNormalCdf(mean, std, xmin, xmax)
thinkplot.Plot(xs, ps, label='model', linewidth=4, color='0.8')
thinkplot.Cdf(cdf)
def MakeNormalPlot(values, label=''):
    """
    Generates a normal probability plot.

    values: sequence
    """
mean, var = thinkstats2.TrimmedMeanVar(values, p=0.01)
std = np.sqrt(var)
xs = [-5, 5]
xs, ys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(xs, ys, color='0.8', label='model')
xs, ys = thinkstats2.NormalProbability(values)
thinkplot.Plot(xs, ys, '+', alpha=0.3, label=label)
%time gss = pd.read_hdf('gss.hdf5', 'gss')
gss.shape
gss.head()
def replace_invalid(df):
df.realinc.replace([0], np.nan, inplace=True)
df.educ.replace([98,99], np.nan, inplace=True)
# 89 means 89 or older
df.age.replace([98, 99], np.nan, inplace=True)
df.cohort.replace([9999], np.nan, inplace=True)
df.adults.replace([9], np.nan, inplace=True)
replace_invalid(gss)
cdf_age = Cdf(gss.age)
thinkplot.cdf(cdf_age, label='age')
decorate(title='Distribution of age',
xlabel='Age (years)',
ylabel='CDF')
MakeNormalModel(gss.age.dropna(), label='')
decorate(title='Distribution of age',
xlabel='Age (years)',
ylabel='CDF')
MakeNormalPlot(gss.age.dropna(), label='')
decorate(title='Normal probability plot',
xlabel='Standard normal sample',
ylabel='Age (years)')
thinkplot.cdf(cdf_age, label='age', complement=True)
decorate(title='Distribution of age',
xlabel='Age (years)',
ylabel='Complementary CDF, log scale',
yscale='log')
thinkplot.cdf(cdf_age, label='age')
decorate(title='Distribution of age',
xlabel='Age (years)',
ylabel='CDF',
xscale='log')
values = np.log10(gss.age.dropna())
MakeNormalModel(values, label='')
decorate(title='Distribution of age',
xlabel='Age (log10 years)',
ylabel='CDF')
MakeNormalPlot(values, label='')
decorate(title='Distribution of age',
xlabel='Standard normal sample',
ylabel='Age (log10 years)')
thinkplot.cdf(cdf_age, label='age', complement=True)
decorate(title='Distribution of age',
xlabel='Age (years)',
ylabel='Complementary CDF, log scale',
xscale='log',
yscale='log')
thinkplot.cdf(cdf_age, label='age', transform='weibull')  # thinkplot expects the lowercase transform name
decorate(title='Distribution of age',
xlabel='Age (years)',
ylabel='log Complementary CDF, log scale',
xscale='log',
yscale='log')
cdf_realinc = Cdf(gss.realinc)
thinkplot.cdf(cdf_realinc, label='income')
decorate(title='Distribution of income',
xlabel='Income (1986 $)',
ylabel='CDF')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
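# A minimal sketch of what the income analysis might look like, mirroring
# the age analysis above. This is my illustration; the graded solution
# cells are intentionally left as "# Solution goes here". Income is
# heavy-tailed, so a lognormal model is a natural first candidate:
log_income = np.log10(gss.realinc.dropna())
MakeNormalModel(log_income, label='income')
decorate(title='Distribution of income',
         xlabel='Income (log10 1986 $)',
         ylabel='CDF')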
%time brfss = pd.read_hdf('brfss.hdf5', 'brfss')
brfss.head()
heights = brfss.HTM4
cdf_heights = Cdf(heights)
thinkplot.Cdf(cdf_heights)
decorate(xlabel='Height (cm)', ylabel='CDF')
from scipy.stats import gaussian_kde
kde = gaussian_kde(heights.dropna())
xs = np.linspace(heights.min(), heights.max())
ds = kde.evaluate(xs)
ds /= ds.sum()
plt.plot(xs, ds, label='KDE heights')
decorate(xlabel='Height (cm)', ylabel='PDF')
kde = gaussian_kde(heights.dropna(), bw_method=0.3)
ds = kde.evaluate(xs)
ds /= ds.sum()
plt.plot(xs, ds, label='KDE heights')
decorate(xlabel='Height (cm)', ylabel='PDF')
mean = heights.mean()
std = heights.std()
mean, std
normal_pdf = thinkstats2.NormalPdf(mean, std)
ps = normal_pdf.Density(xs)
ps /= ps.sum()
plt.plot(xs, ps, color='gray', label='Normal model')
plt.plot(xs, ds, label='KDE heights')
decorate(xlabel='Height (cm)', ylabel='PDF')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
male = (brfss.SEX == 1)
male_weights = brfss.loc[male, 'WTKG3']
female = (brfss.SEX == 2)
female_weights = brfss.loc[female, 'WTKG3']
male_weights.mean(), male_weights.median()
male_weights.skew(), thinkstats2.Skewness(male_weights.dropna())
thinkstats2.PearsonMedianSkewness(male_weights.dropna())
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Here are some of the functions from Chapter 5.
Step4: Read the GSS data again.
Step5: Most variables use special codes to indicate missing data. We have to be careful not to use these codes as numerical data; one way to manage that is to replace them with NaN, which Pandas recognizes as a missing value.
Step6: Distribution of age
Step7: Exercise
Step8: 2) Here's a normal probability plot for the distribution of ages.
Step9: 3) Here's the complementary CDF on a log-y scale.
Step10: 4) Here's the CDF of ages on a log-x scale.
Step11: 5) Here's the CDF of the logarithm of ages, compared to a normal model.
Step12: 6) Here's a normal probability plot for the logarithm of ages.
Step13: 7) Here's the complementary CDF on a log-log scale.
Step14: 8) Here's a test to see whether ages are well-modeled by a Weibull distribution.
Step15: Distribution of income
Step16: Exercise
Step17: 2) Here's a normal probability plot for the values.
Step18: 3) Here's the complementary CDF on a log-y scale.
Step19: 4) Here's the CDF on a log-x scale.
Step20: 5) Here's the CDF of the logarithm of the values, compared to a normal model.
Step21: 6) Here's a normal probability plot for the logarithm of the values.
Step22: 7) Here's the complementary CDF on a log-log scale.
Step23: 8) Here's a test to see whether the values are well-modeled by a Weibull distribution.
Step24: BRFSS
Step25: Let's look at the distribution of height in the BRFSS dataset. Here's the CDF.
Step26: To see whether a normal model describes this data well, we can use KDE to estimate the PDF.
Step27: Here's an example using the default bandwidth method.
Step28: It doesn't work very well; we can improve it by overriding the bandwidth with a constant.
Step29: Now we can generate a normal model with the same mean and standard deviation.
Step30: Here's the model compared to the estimated PDF.
Step31: The data don't fit the model particularly well, possibly because the distribution of heights is a mixture of two distributions, for men and women.
Step32: Exercise
Step33: Exercise
Step34: Skewness
Step35: As we've seen, these distributions are skewed to the right, so we expect the mean to be higher than the median.
Step36: We can compute the moment-based sample skewness using Pandas or thinkstats2. The results are almost the same.
Step37: But moment-based sample skewness is a terrible statistic! A more robust alternative is Pearson's median skewness
Step38: Exercise
|
1,327
|
<ASSISTANT_TASK:>
Python Code:
# create a model and add the parameters.
m = Model()
m.add_parameters("W", (8,2))
m.add_parameters("V", (1,8))
m.add_parameters("b", (8))
renew_cg() # new computation graph. not strictly needed here, but good practice.
# associate the parameters with cg Expressions
W = parameter(m["W"])
V = parameter(m["V"])
b = parameter(m["b"])
#b[1:-1].value()
b.value()
x = vecInput(2) # an input vector of size 2. Also an expression.
output = logistic(V*(tanh((W*x)+b)))
# we can now query our network
x.set([0,0])
output.value()
# we want to be able to define a loss, so we need an input expression to work against.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
x.set([1,0])
y.set(0)
print loss.value()
y.set(1)
print loss.value()
trainer = SimpleSGDTrainer(m)
x.set([1,0])
y.set(1)
loss_value = loss.value() # this performs a forward through the network.
print "the loss before step is:",loss_value
# now do an optimization step
loss.backward() # compute the gradients
trainer.update()
# see how it affected the loss:
loss_value = loss.value(recalculate=True) # recalculate=True means "don't use precomputed value"
print "the loss after step is:",loss_value
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
x.set([0,1])
print "0,1",output.value()
x.set([1,0])
print "1,0",output.value()
x.set([0,0])
print "0,0",output.value()
x.set([1,1])
print "1,1",output.value()
W.value()
V.value()
b.value()
# define the parameters
m = Model()
m.add_parameters("W", (8,2))
m.add_parameters("V", (1,8))
m.add_parameters("b", (8))
# renew the computation graph
renew_cg()
# add the parameters to the graph
W = parameter(m["W"])
V = parameter(m["V"])
b = parameter(m["b"])
# create the network
x = vecInput(2) # an input vector of size 2.
output = logistic(V*(tanh((W*x)+b)))
# define the loss with respect to an output y.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
# create training instances
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# train the network
trainer = SimpleSGDTrainer(m)
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
# create training instances, as before
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# create a network for the xor problem given input and output
def create_xor_network(model, inputs, expected_answer):
renew_cg()
W = parameter(model["W"])
V = parameter(model["V"])
b = parameter(model["b"])
x = vecInput(len(inputs))
x.set(inputs)
y = scalarInput(expected_answer)
output = logistic(V*(tanh((W*x)+b)))
loss = binary_log_loss(output, y)
return loss
m = Model()
m.add_parameters("W", (8,2))
m.add_parameters("V", (1,8))
m.add_parameters("b", (8))
trainer = SimpleSGDTrainer(m)
seen_instances = 0
total_loss = 0
for question, answer in zip(questions, answers):
loss = create_xor_network(m, question, answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first block creates a model and populates it with parameters.
Step2: Training
Step3: To use the trainer, we need to
Step4: The optimization step indeed made the loss decrease. We now need to run this in a loop.
Step5: We now feed each question / answer pair to the network, and try to minimize the loss.
Step6: Our network is now trained. Let's verify that it indeed learned the xor function
Step7: In case we are curious about the parameter values, we can query them
Step8: To summarize
Step9: Dynamic Networks
|
1,328
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 8
df = pd.read_csv("data/titanic.csv")
df.head()
df['Age'].hist()
df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))
df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')
data = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
data.head()
data['1999':].resample('M').mean().plot(ylim=[0,120])
data['1999':].resample('A').mean().plot(ylim=[0,100])
data['weekday'] = data.index.weekday
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour])['BASCH'].mean().unstack(level=0)
data_weekend.plot()
df
df.index
df.columns
df.dtypes
df.info()
df.values
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data)
df_countries
df['Age']
age = df['Age']
age.index
age.values[:10]
age[0]
df = df.set_index('Name')
df
age = df['Age']
age
age['Dooley, Mr. Patrick']
age * 1000
age.mean()
age[age > 70]
df['Embarked'].value_counts()
# %load snippets/01-pandas_introduction31.py
# %load snippets/01-pandas_introduction32.py
# %load snippets/01-pandas_introduction33.py
#pd.read
#df.to
pd.read_csv?
df = pd.read_csv("data/titanic.csv")
df.head()
# %load snippets/01-pandas_introduction39.py
no2
no2.head(3)
no2.tail()
no2.info()
no2.describe()
no2.plot(kind='box', ylim=[0,250])
no2['BASCH'].plot(kind='hist', bins=50)
# %load snippets/01-pandas_introduction47.py
no2.plot(figsize=(12,6))
no2[-500:].plot(figsize=(12,6))
df = pd.read_csv("data/titanic.csv")
df['Age']
df[['Age', 'Fare']]
df[10:15]
df = df.set_index('Name')
df.loc['Bonnell, Miss. Elizabeth', 'Fare']
df.loc['Bonnell, Miss. Elizabeth':'Andersson, Mr. Anders Johan', :]
df.iloc[0:2,1:3]
df.loc['Braund, Mr. Owen Harris', 'Survived'] = 100
df
df['Fare'] > 50
df[df['Fare'] > 50]
df = pd.read_csv("data/titanic.csv")
# %load snippets/01-pandas_introduction63.py
# %load snippets/01-pandas_introduction64.py
# %load snippets/01-pandas_introduction65.py
# %load snippets/01-pandas_introduction66.py
# %load snippets/01-pandas_introduction67.py
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
df['data'].sum()
for key in ['A', 'B', 'C']:
print(key, df[df['key'] == key]['data'].sum())
df.groupby('key').sum()
df.groupby('key').aggregate(np.sum) # 'sum'
df.groupby('key')['data'].sum()
df = pd.read_csv("data/titanic.csv")
df.head()
# %load snippets/01-pandas_introduction76.py
# %load snippets/01-pandas_introduction77.py
# %load snippets/01-pandas_introduction78.py
# %load snippets/01-pandas_introduction79.py
# %load snippets/01-pandas_introduction80.py
df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))
# %load snippets/01-pandas_introduction82.py
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
no2.index
no2["2010-01-01 09:00": "2010-01-01 12:00"]
no2['2012-01':'2012-03']
no2.index.hour
no2.index.year
no2.plot()
no2.head()
no2.resample('D').mean().head()
no2.resample('D').max().head()
no2.resample('M').mean().plot() # 'A'
# no2['2012'].resample('D').plot()
# %load snippets/01-pandas_introduction95.py
# %load snippets/01-pandas_introduction96.py
# %load snippets/01-pandas_introduction97.py
# %load snippets/01-pandas_introduction98.py
# %load snippets/01-pandas_introduction99.py
# %load snippets/01-pandas_introduction100.py
no2.index.weekday?
# %load snippets/01-pandas_introduction102.py
# %load snippets/01-pandas_introduction103.py
# %load snippets/01-pandas_introduction104.py
# %load snippets/01-pandas_introduction105.py
# %load snippets/01-pandas_introduction106.py
# %load snippets/01-pandas_introduction107.py
# re-reading the data to have a clean version
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
# %load snippets/01-pandas_introduction109.py
# %load snippets/01-pandas_introduction110.py
# %load snippets/01-pandas_introduction111.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Let's start with a showcase
Step2: Starting from reading this dataset, to answering questions about this data in a few lines of code
Step3: How does the survival rate of the passengers differ between sexes?
Step4: Or how does it differ between the different classes?
Step5: All the needed functionality for the above examples will be explained throughout this tutorial.
Step6: to answering questions about this data in a few lines of code
Step7: What is the difference in diurnal profile between weekdays and weekend?
Step8: We will come back to these example, and build them up step by step.
Step9: Attributes of the DataFrame
Step10: To check the data types of the different columns
Step11: An overview of that information can be given with the info() method
Step12: A DataFrame also has a values attribute, but be careful: with heterogeneous column types the values are upcast to a single common dtype (often object)
Step13: Apart from importing your data from an external source (text file, excel, database, ..), one of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Step14: One-dimensional data
Step15: Attributes of a Series
Step16: You can access the underlying numpy array representation with the .values attribute
Step17: We can access series values via the index, just like for NumPy arrays
Step18: Unlike the NumPy array, though, this index can be something other than integers
Step19: but with the power of numpy arrays. Many things you can do with numpy arrays can also be applied to DataFrames / Series.
Step20: A range of methods
Step21: Fancy indexing, like indexing with a list or boolean indexing
Step22: But also a lot of pandas specific methods, e.g.
Step23: Exercise
Step24: Exercise
Step25: 3. Data import and export
Step26: Very powerful csv reader
Step27: Luckily, if we have a well formed csv file, we don't need many of those arguments
Step28: Exercise
Step29: 4. Exploration
Step30: info()
Step31: Getting some basic summary statistics about the data with describe
Step32: Quickly visualizing the data
Step33: Exercise
Step34: The default plot (when not specifying kind) is a line plot of all columns
Step35: This does not say too much ..
Step36: Or we can use some more advanced time series features -> see further in this notebook!
Step37: df[] provides some convenience shortcuts
Step38: or multiple columns
Step39: But, slicing accesses the rows
Step40: Systematic indexing with loc and iloc
Step41: Selecting by position with iloc works similar as indexing numpy arrays
Step42: The different indexing methods can also be used to assign data
Step43: Boolean indexing (filtering)
Step44: Exercise
Step45: Exercise
Step46: 6. The group-by operation
Step47: Recap
Step48: However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.
Step49: This becomes very verbose when having multiple groups. You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.
Step50: And many more methods are available.
Step51: Application of the groupby concept on the titanic data
Step52: Exercise
Step53: Exercise
Step54: Exercise
Step55: Exercise
Step56: Exercise
Step57: Exercise
Step58: 7. Working with time series data
Step59: When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available
Step60: Indexing a time series works with strings
Step61: A nice feature is "partial string" indexing, so you don't need to provide the full datetime string.
Step62: Time and date components can be accessed from the index
Step63: Converting your time series with resample
Step64: The time series has a frequency of 1 hour. I want to change this to daily
Step65: Above I take the mean, but as with groupby I can also specify other methods
Step66: The string to specify the new time frequency
Step67: Exercise
Step68: Exercise
Step69: Now, we can calculate the mean of each month over the different years
Step70: <div class="alert alert-success">
Step71: <div class="alert alert-success">
Step72: Add a column indicating week/weekend
Step73: Now we can groupby the hour of the day and the weekend (or use pivot_table)
Step74: Exercise
|
1,329
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import timeit
import cv2, mahotas, skimage, skimage.morphology
from pylab import imshow, show
import matplotlib.pyplot as plt
%matplotlib inline
lena_path = 'pershoot/lena.jpeg'
def pyplots(f1, f2, f3, f4, cmap='gray', scatter=False):
plt.figure(figsize=(10,10))
plt.subplot(1, 4, 1)
plt.imshow(f1, cmap=cmap)
plt.axis('off')
if scatter:
plt.scatter(x=[100, 200], y=[100, 200], c='r', s=40)
for index, im in enumerate([f2, f3, f4]):
plt.subplot(1, 4, index+2)
if cmap is not None:
plt.imshow(im, cmap=cmap)
else:
plt.imshow(im)
plt.axis('off')
fRGB = mahotas.imread(lena_path).astype(np.uint8)
f64 = mahotas.imread(lena_path, as_grey=True)
f = f64.astype(int)
otsu = mahotas.otsu(f.astype(np.uint8))
print(otsu)
fbin = f > otsu
fbin8 = fbin.astype(np.uint8)
Bc = np.eye(3)
Bc = Bc.astype(bool)
Bc8 = Bc.astype(np.uint8)
fMH = mahotas.erode(fbin, Bc)
fSK = skimage.morphology.erosion(fbin8, Bc8)
fCV = cv2.erode(fbin8, Bc8)
pyplots(f, fMH, fSK, fCV)
fMH = mahotas.dilate(fbin, Bc)
fSK = skimage.morphology.dilation(fbin8, Bc8)
fCV = cv2.dilate(fbin8, Bc8)
pyplots(f, fMH, fSK, fCV)
fMH = mahotas.open(fbin, Bc)
fSK = skimage.morphology.opening(fbin8, Bc8)
fCV = cv2.morphologyEx(fbin8, cv2.MORPH_OPEN, Bc8)
pyplots(f, fMH, fSK, fCV)
fMH = mahotas.close(fbin, Bc)
fSK = skimage.morphology.closing(fbin8, Bc8)
fCV = cv2.morphologyEx(fbin8, cv2.MORPH_CLOSE, Bc8)
pyplots(f, fMH, fSK, fCV)
fMH = mahotas.sobel(f64, just_filter=True)
fSK = skimage.filters.sobel(f64)
fCV = np.absolute(cv2.Sobel(f64,cv2.CV_64F,1,1, ksize=11))
pyplots(f, fMH, fSK, fCV)
markers = np.zeros_like(f)
markers[90,100] = 1
markers[90,20] = 2
markers[90,60] = 3
markers = markers.astype(np.int32)
#fMH = mahotas.cwatershed(f, markers)
#fSK = skimage.morphology.watershed(f, markers)
#fCV = cv2.watershed(fRGB, markers)
#pyplots(f, fMH, fSK, fCV, cmap='gray', scatter=True)
plt.imshow(f, cmap='gray')
plt.axis("off")
plt.scatter(x=[90, 90, 90],
y=[100, 20, 60],
c='r', s=40)
marker3d = np.zeros_like(fRGB)
marker3d[90,100,:] = [1,1,1]
marker3d[90,20,:] = [2,2,2]
marker3d[90,60,:] = [3,3,3]
marker3d = marker3d.astype(np.int32)
marker = np.zeros_like(f)
marker[90,100] = 1
marker[90,20] = 2
marker[90,60] = 3
marker = marker.astype(np.int32)
fMH = mahotas.cwatershed(fRGB, marker3d)
fMH *= otsu
fMH = fMH.astype(np.int32)
fSK = skimage.morphology.watershed(fRGB, marker3d)
fSK *= otsu
fSK = fSK.astype(np.int32)
fCV = cv2.watershed(fRGB, marker)
fCV += 1
fCV *=otsu
pyplots(fRGB, fMH, fSK, fCV, cmap='jet')
pre ='''
import numpy as np
import cv2, mahotas, skimage
from skimage.feature import greycomatrix, greycoprops
lena_path = 'pershoot/lena.jpeg'
fRGB = mahotas.imread(lena_path, as_grey=False)
f64 = mahotas.imread(lena_path, as_grey=True)
f = f64.astype(int)
marker3d = np.zeros_like(fRGB)
marker3d[90,100,:] = [1,1,1]
marker3d[90,20,:] = [2,2,2]
marker3d[90,60,:] = [3,3,3]
marker3d = marker3d.astype(np.int32)
marker = np.zeros_like(f)
marker[90,100] = 1
marker[90,20] = 2
marker[90,60] = 3
marker = marker.astype(np.int32)
otsu = mahotas.otsu(f.astype(np.uint8))
fbin = f > otsu
fbin8 = fbin.astype(np.uint8)
Bc = np.eye(3)
Bc = Bc.astype(bool)
Bc8 = Bc.astype(np.uint8)
'''
import sys
def t(s):
return timeit.timeit(s, setup=pre, number=10)
tests = [
('erode', [
'mahotas.erode(fbin, Bc)',
        'skimage.morphology.erosion(fbin8, Bc8)',
'cv2.erode(fbin8, Bc8)',
]),
('dilate', [
'mahotas.dilate(fbin, Bc)',
'skimage.morphology.dilation(fbin8, Bc8)',
'cv2.dilate(fbin8, Bc8)',
]),
('open', [
'mahotas.open(fbin, Bc)',
'skimage.morphology.opening(fbin8, Bc8)',
'cv2.morphologyEx(fbin8, cv2.MORPH_OPEN, Bc8)',
]),
('close', [
'mahotas.close(fbin, Bc)',
'skimage.morphology.closing(fbin8, Bc8)',
'cv2.morphologyEx(fbin8, cv2.MORPH_CLOSE, Bc8)',
]),
('sobel', [
'mahotas.sobel(f)',
'skimage.filters.sobel(f64)',
'cv2.Sobel(f64,cv2.CV_64F,1,1)',
]),
('cwatershed', [
'mahotas.cwatershed(fRGB, marker3d)',
'skimage.morphology.watershed(fRGB, marker3d)',
'cv2.watershed(fRGB, marker)',
]),
('haralick', [
'mahotas.features.haralick(f)',
'greycoprops(greycomatrix(f, [1], [0]))',
None,
]),
]
print(r'%-12s|%9s |%9s |%9s |' % ('Algorithm', 'mahotas','skimage','opencv'))
for name, statements in tests:
    sys.stdout.write('%-12s|' % name)
    for st in statements:
        if st is None:
            # no timing available (e.g. OpenCV has no haralick implementation)
            sys.stdout.write('%9s |' % 'NA')
        else:
            sys.stdout.write('%9s |' % ('%.4f' % t(st)))
    sys.stdout.write('\n')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: erode
Step2: dilate
Step3: open
Step4: close
Step5: sobel
Step6: watershed
Step7: mahotas vs skimage vs opencv
|
1,330
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
sns.set_context("poster")
sns.set(style="ticks",font="Arial",font_scale=2)
os.getcwd()
os.listdir()
subjects = pd.read_table("Study2_Subjects.csv", sep=",")
subjects.head()
subjects["Age"].hist();
subjects["Sex"]
subjects["Q1"].hist();
subjects["Q2"].hist();
subjects["Q3"].hist();
subjects["Condition"]
#check for missing values
subjects.isnull()
subjects.loc[subjects["ID"]==9, "Condition"] = "C"
subjects = subjects.rename({'Q1':'Anxiety',
'Q2':'Depression',
'Q3':'Stress'}, axis="columns")
subjects["Sex"] = subjects["Sex"].replace({1:"Male",2:"Female"})
subjects["Composite"] = subjects[["Anxiety","Depression","Stress"]].sum(axis="columns")
trials = pd.read_table("Study2_Trials.csv", sep=",")
data = subjects.merge(trials,on="ID")
data.head()
id_cols, value_cols = [], []
for column in data.columns:
if 'RT' in column:
value_cols.append(column)
else:
id_cols.append(column)
data_long = pd.melt(data,id_vars=id_cols,var_name="Trial",value_vars=value_cols,value_name="RT")
data_long.head()
subset = data_long[data_long["Anxiety"] >= 25]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open and visually inspect the data
Step2: View the first 5 rows of the data frame
Step3: Examine the data for issues. Some things to look for
Step4: Correcting data issues
Step5: The variables Q1, Q2 and Q3 don't have useful names, update them to something useful
Step6: The sex variable is coded in a non-intuitive way, change it to male/female
Step7: Create new Variables
Step8: Combine two datasets together and reshape
Step9: Now that you have a one frame with all the data, put it into long format. Each subject should have one row for each trial.
Step10: You want to do a separate analysis only on the people who had a high score on the anxiety questionnaire. Create a new data frame (in long format) with just the individuals who got 25 or higher
|
1,331
|
<ASSISTANT_TASK:>
Python Code:
import math
import math
x = math.cos(2 * math.pi)
print(x)
from math import cos, pi
x = cos(2 * pi)
print(x)
from math import cos as cosine # Now the `cos` function can be referenced as `cosine`
cosine(pi/2)
from math import *
print("Cosine Function: ", cos(pi))
print("Sin Function: ", sin(pi))
print("Logarithm: ", log(e))
print("Power function: ", pow(3, 3))
import math
print(dir(math))
help(math.log)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This includes the whole module and makes it available for use later in the program.
Step2: Alternatively, we can chose to import all symbols (functions and variables) in a module to the current namespace (so that we don't need to use the prefix "math." every time we use something from the math module
Step3: This is called selective Import
Step4: Finally, if we want to import everything from a module, we may the * character
Step5: Looking at what a module contains, and its documentation
Step6: And using the function help we can get a description of each function (well, almost: not every function has a docstring, as these descriptions are technically called, but the vast majority of functions are documented this way).
|
1,332
|
<ASSISTANT_TASK:>
Python Code:
# Black linter, optional
%load_ext lab_black
import pandas as pd
import numpy as np
import chainladder as cl
import matplotlib.pyplot as plt
import os
%matplotlib inline
print("pandas: " + pd.__version__)
print("numpy: " + np.__version__)
print("chainladder: " + cl.__version__)
genins = cl.load_sample("genins")
genins_dev = cl.Pipeline(
[("dev", cl.Development()), ("tail", cl.TailCurve())]
).fit_transform(genins)
genins_model = cl.Chainladder().fit(genins_dev)
genins_model.ultimate_
genins_model.ibnr_
genins
genins_model.full_triangle_
genins_model.full_triangle_.dev_to_val()
genins_model.full_triangle_.valuation_date
genins_model.full_triangle_.dev_to_val().cum_to_incr()
genins_model.full_expectation_
genins_model.full_triangle_ - genins_model.full_expectation_
genins_model.full_triangle_[
genins_model.full_triangle_.valuation <= genins.valuation_date
] - genins_model.full_expectation_[
genins_model.full_triangle_.valuation <= genins.valuation_date
]
genins_AvE = genins - genins_model.full_expectation_
genins_AvE[genins_AvE.valuation <= genins.valuation_date]
genins_AvE[genins_AvE.valuation <= genins.valuation_date].heatmap()
cal_yr_ibnr = genins_model.full_triangle_.dev_to_val().cum_to_incr()
cal_yr_ibnr[cal_yr_ibnr.valuation.year == 2011]
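# NOTE: the cells that load the `comauto` data and construct `bf_model`
# appear to be missing from this excerpt. A plausible reconstruction,
# following the pattern used for `wkcomp` further below; the apriori value
# of 0.65 is an assumption, not necessarily the original author's choice:
comauto = (
    cl.load_sample("clrd")
    .groupby("LOB")
    .sum()
    .loc["comauto"][["CumPaidLoss", "EarnedPremNet"]]
)
bf_model = cl.BornhuetterFerguson(apriori=0.65)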
bf_model.fit(
comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
bf_model.ultimate_
b1 = cl.BornhuetterFerguson(apriori=0.75).fit(
comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
b2 = cl.BornhuetterFerguson(apriori=1.00).fit(
comauto["CumPaidLoss"],
sample_weight=0.75 * comauto["EarnedPremNet"].latest_diagonal,
)
b1.ultimate_ == b2.ultimate_
cl_model = cl.Chainladder().fit(comauto["CumPaidLoss"])
plt.plot(
bf_model.ultimate_.to_frame().index.year, bf_model.ultimate_.to_frame(), label="BF",
)
plt.plot(
cl_model.ultimate_.to_frame().index.year, cl_model.ultimate_.to_frame(), label="CL",
)
plt.legend(loc="upper left")
bk_model = cl.Benktander(apriori=0.75, n_iters=2)
bk_model.fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
plt.plot(
bf_model.ultimate_.to_frame().index.year, bf_model.ultimate_.to_frame(), label="BF"
)
plt.plot(
cl_model.ultimate_.to_frame().index.year, cl_model.ultimate_.to_frame(), label="CL"
)
plt.plot(
bk_model.ultimate_.to_frame().index.year, bk_model.ultimate_.to_frame(), label="BK"
)
plt.legend(loc="upper left")
cc_model = cl.CapeCod(decay=1, trend=0).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
cc_model.apriori_
latest_diagonal = comauto["CumPaidLoss"].latest_diagonal
cdf_as_origin_vector = (
cl.Chainladder().fit(comauto["CumPaidLoss"]).ultimate_
/ comauto["CumPaidLoss"].latest_diagonal
)
latest_diagonal.sum() / (
comauto["EarnedPremNet"].latest_diagonal / cdf_as_origin_vector
).sum()
cc_model = cl.CapeCod(decay=0, trend=0).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
cc_model.apriori_
latest_diagonal / (comauto["EarnedPremNet"].latest_diagonal / cdf_as_origin_vector)
cc_model.ultimate_ - cl_model.ultimate_
plt.plot(cc_model.apriori_.to_frame().index.year, cc_model.apriori_.to_frame())
trended_cc_model = cl.CapeCod(decay=0, trend=0.01).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
plt.plot(
cc_model.apriori_.to_frame().index.year,
cc_model.apriori_.to_frame(),
label="Untrended",
)
plt.plot(
trended_cc_model.apriori_.to_frame().index.year,
trended_cc_model.apriori_.to_frame(),
label="Trended",
)
plt.legend(loc="lower right")
trended_cc_model = cl.CapeCod(decay=0, trend=0.01).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
trended_decayed_cc_model = cl.CapeCod(decay=0.75, trend=0.01).fit(
X=comauto["CumPaidLoss"], sample_weight=comauto["EarnedPremNet"].latest_diagonal
)
plt.plot(
cc_model.apriori_.to_frame().index.year,
cc_model.apriori_.to_frame(),
label="Untrended",
)
plt.plot(
trended_cc_model.apriori_.to_frame().index.year,
trended_cc_model.apriori_.to_frame(),
label="Trended",
)
plt.plot(
trended_decayed_cc_model.apriori_.to_frame().index.year,
trended_decayed_cc_model.apriori_.to_frame(),
label="Trended and Decayed",
)
plt.legend(loc="lower right")
plt.plot(
trended_cc_model.apriori_.to_frame().index.year,
trended_cc_model.apriori_.to_frame(),
label="Trended",
)
plt.plot(
trended_cc_model.detrended_apriori_.to_frame().index.year,
trended_cc_model.detrended_apriori_.to_frame(),
label="Detended to Original",
)
plt.legend(loc="lower right")
bf_model = cl.BornhuetterFerguson().fit(
X=comauto["CumPaidLoss"],
sample_weight=trended_cc_model.detrended_apriori_
* comauto["EarnedPremNet"].latest_diagonal,
)
bf_model.ultimate_.sum() - trended_cc_model.ultimate_.sum()
wkcomp = (
cl.load_sample("clrd")
.groupby("LOB")
.sum()
.loc["wkcomp"][["CumPaidLoss", "EarnedPremNet"]]
)
wkcomp
patterns = cl.Pipeline(
[
(
"dev",
cl.Development(
average=["volume"] * 5 + ["simple"] * 4,
n_periods=7,
drop_valuation="1995",
),
),
("tail", cl.TailCurve(curve="inverse_power", extrap_periods=80)),
]
)
cc = cl.CapeCod(decay=0.8, trend=0.02).fit(
X=patterns.fit_transform(wkcomp["CumPaidLoss"]),
sample_weight=wkcomp["EarnedPremNet"].latest_diagonal,
)
cc.ultimate_
plt.bar(cc.ultimate_.to_frame().index.year, cc.ultimate_.to_frame()["2261"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Chainladder Method
Step2: We can now use the basic Chainladder estimator to estimate ultimate_ values of our Triangle.
Step3: We can also view the ibnr_. Technically, the term IBNR is reserved for Incurred But Not Reported, but the chainladder models use it to describe the difference between the ultimate and the latest evaluation period.
Step4: It is often useful to see the completed Triangle and this can be accomplished by inspecting the full_triangle_. As with most other estimator properties, the full_triangle_ is itself a Triangle and can be manipulated as such.
Step5: Notice the calendar year of our ultimates. While ultimates will generally be realized before this date, the chainladder package picks the highest allowable date available for its ultimate_ valuation.
Step6: We can further manipulate the "triangle", such as applying cum_to_incr().
Step7: Another useful property is full_expectation_. Similar to the full_triangle, it "squares" the Triangle, but replaces the known data with expected values implied by the model and development pattern.
Step8: With some clever arithmetic, we can use these objects to give us other useful information. For example, we can retrospectively review the actual Triangle against its modeled expectation.
Step9: We can also filter out the lower right part of the triangle with [genins_model.full_triangle_.valuation <= genins.valuation_date].
Step10: Getting comfortable with manipulating Triangles will greatly improve our ability to extract value out of the chainladder package. Here is another way of getting the same answer.
Step11: We can also filter out the lower right part of the triangle with [genins_model.full_triangle_.valuation <= genins.valuation_date] before applying the heatmap().
Step12: Can you figure out how to get the expected IBNR runoff in the upcoming year?
Step13: The Bornhuetter-Ferguson Method
Step14: Having an apriori that takes on only a constant for all origins can be limiting; however, we can apply a varying vector on the exposure vector to get a varying apriori.
Step15: Having an apriori that takes on only a constant for all origins can be limiting. This shouldn't stop the practitioner from exploiting the fact that the apriori can be embedded directly in the exposure vector itself, allowing full customization of the apriori.
Step16: If we need to create a new column, such as AdjEarnedPrmNet with varying implied loss ratios, it is recommended that we perform any data modification in pandas rather than on Triangle objects.
Step17: The Benktander Method
Step18: Fitting the Benktander method looks identical to the other methods.
Step19: The Cape Cod Method
Step20: When we fit a CapeCod method, we can see the apriori it computes with the given decay and trend assumptions. Since it is an array of estimated parameters, this CapeCod attribute is called the apriori_, with a trailing underscore.
Step21: With decay=1, each origin period gets the same apriori_ (this is the traditional Cape Cod). The apriori_ is calculated using the latest diagonal over the used-up exposure, where the used-up exposure is the exposure vector / CDF. Let's validate the calculation of the a priori.
Step22: With decay=0, the apriori_ for each origin period stands on its own.
Step23: Doing the same on our manually calculated apriori_ yields the same result.
Step24: Let's verify the result of this Cape Cod model's result with the Chainladder's.
Step25: We can examine the apriori_s to see whether there exhibit any trends over time.
Step26: Looks like there is a small positive trend, let's judgementally select the trend as 1%.
Step27: We can of course utilize both the trend and the decay parameters together. Adding trend to the CapeCod method is intended to adjust the apriori_s to a common level. Once at a common level, the apriori_ can be estimated from multiple origin periods using the decay factor.
Step28: Once estimated, it is necessary to detrend our apriori_s back to their untrended levels and these are contained in detrended_apriori_. It is the detrended_apriori_ that gets used in the calculation of ultimate_ losses.
Step29: The detrended_apriori_ is a much smoother estimate of the initial expected ultimate_. With the detrended_apriori_ in hand, the CapeCod method estimator behaves exactly like our the BornhuetterFerguson model.
Step30: Recap
Step31: Let's calculate the age-to-age factors
|
1,333
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import copy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sys
import os
import re
from keras.datasets import imdb, reuters
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.optimizers import SGD, RMSprop
from keras.utils import np_utils
from keras.layers.convolutional import Convolution1D, MaxPooling1D, ZeroPadding1D, AveragePooling1D
from keras.callbacks import EarlyStopping
from keras.layers.normalization import BatchNormalization
from keras.preprocessing import sequence
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import SimpleRNN, LSTM, GRU
from keras.layers.wrappers import TimeDistributed
from keras.preprocessing.text import Tokenizer
path = "../../../class_data/aclImdb/"
ff = [path + "train/pos/" + x for x in os.listdir(path + "train/pos")] + \
[path + "train/neg/" + x for x in os.listdir(path + "train/neg")] + \
[path + "test/pos/" + x for x in os.listdir(path + "test/pos")] + \
[path + "test/neg/" + x for x in os.listdir(path + "test/neg")]
TAG_RE = re.compile(r'<[^>]+>')
def remove_tags(text):
return TAG_RE.sub('', text)
input_label = ([1] * 12500 + [0] * 12500) * 2
input_text = []
for f in ff:
    with open(f) as fin:
        input_text += [remove_tags(" ".join(fin.readlines()))]
num_words = 2000
tok = Tokenizer(num_words)
tok.fit_on_texts(input_text[:25000])
words = []
for iter in range(num_words):
words += [key for key,value in tok.word_index.items() if value==iter+1]
words[:10]
X_train = tok.texts_to_sequences(input_text[:25000])
X_test = tok.texts_to_sequences(input_text[25000:])
y_train = input_label[:25000]
y_test = input_label[25000:]
X_train = sequence.pad_sequences(X_train, maxlen=100)
X_test = sequence.pad_sequences(X_test, maxlen=100)
def reconstruct_text(index, words):
text = []
for ind in index:
if ind != 0:
text += [words[ind-1]]
else:
text += [""]
return text
print(input_text[100])
print("\n\n")
print(reconstruct_text(X_train[100][:40], words))
model = Sequential()
model.add(Embedding(num_words, 32, input_length=100))
model.add(Dropout(0.25))
model.add(SimpleRNN(16, return_sequences=False))
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.layers[2].get_weights()[0].shape) # W - input weights
print(model.layers[2].get_weights()[1].shape) # U - recurrent weights
print(model.layers[2].get_weights()[2].shape) # b - bias
model.fit(X_train, y_train, batch_size=32, nb_epoch=10, verbose=1,
validation_data=(X_test, y_test))
model = Sequential()
model.add(Embedding(num_words, 50))
model.add(Dropout(0.25))
model.add(LSTM(32))
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.layers[2].get_weights()[0].shape) # W_i input gate weights
print(model.layers[2].get_weights()[1].shape) # U_i
print(model.layers[2].get_weights()[2].shape) # b_i
print(model.layers[2].get_weights()[3].shape) # W_f forget weights
print(model.layers[2].get_weights()[4].shape) # U_f
print(model.layers[2].get_weights()[5].shape) # b_f
print(model.layers[2].get_weights()[6].shape) # W_c cell weights
print(model.layers[2].get_weights()[7].shape) # U_c
print(model.layers[2].get_weights()[8].shape) # b_c
print(model.layers[2].get_weights()[9].shape) # W_o output weights
print(model.layers[2].get_weights()[10].shape) # U_o
print(model.layers[2].get_weights()[11].shape) # b_o
model.fit(X_train, y_train, batch_size=1, nb_epoch=10, verbose=1,
validation_data=(X_test, y_test))
model = Sequential()
model.add(Embedding(num_words, 32, input_length=100))
model.add(Dropout(0.25))
model.add(GRU(32,activation='relu'))
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.layers[2].get_weights()[0].shape) # W_z update weights
print(model.layers[2].get_weights()[1].shape) # U_z
print(model.layers[2].get_weights()[2].shape) # b_z
print(model.layers[2].get_weights()[3].shape) # W_r reset weights
print(model.layers[2].get_weights()[4].shape) # U_r
print(model.layers[2].get_weights()[5].shape) # b_r
print(model.layers[2].get_weights()[6].shape) # W_h output weights
print(model.layers[2].get_weights()[7].shape) # U_h
print(model.layers[2].get_weights()[8].shape) # b_h
model.fit(X_train, y_train, batch_size=32, nb_epoch=20, verbose=1,
validation_data=(X_test, y_test))
model = Sequential()
model.add(Embedding(num_words, 32, input_length=100))
model.add(Dropout(0.25))
model.add(SimpleRNN(16, return_sequences=False))
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, nb_epoch=5, verbose=1,
validation_data=(X_test, y_test))
model2 = Sequential()
model2.add(Embedding(num_words, 32, input_length=100))
model2.add(Dropout(0.25))
model2.add(SimpleRNN(16, return_sequences=True))
model2.add(TimeDistributed(Dense(256)))
model2.add(Dropout(0.25))
model2.add(Activation('relu'))
model2.add(TimeDistributed(Dense(1)))
model2.add(Activation('sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model2.set_weights(model.get_weights())
y_hat2 = model2.predict(X_train)
y_hat2.shape
ind = 100
tokens = reconstruct_text(X_train[ind], words)
print(input_text[ind])
plt.figure(figsize=(16, 10))
plt.plot(y_hat2[ind], alpha=0.5)
for i in range(len(tokens)):
plt.text(i,0.5,tokens[i],rotation=90)
ind = 22000
tokens = reconstruct_text(X_train[ind], words)
print(input_text[ind])
plt.figure(figsize=(16, 10))
plt.plot(y_hat2[ind],alpha=0.5)
for i in range(len(tokens)):
plt.text(i,0.5,tokens[i],rotation=90)
ind = 10000
tokens = reconstruct_text(X_train[ind], words)
print(input_text[ind])
plt.figure(figsize=(16, 10))
plt.plot(y_hat2[ind],alpha=0.5)
for i in range(len(tokens)):
plt.text(i,0.5,tokens[i],rotation=90)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the IMDB movie review text from disk, strip the HTML tags, and construct the positive/negative sentiment labels.
Step2: Next, we construct a tokenizer object, initialized with the number of total terms we want. I then use the training data to find the top most used words.
Step3: The tokenizer makes getting the words themselves out oddly difficult, but this will do it for us
Step4: We can now use the tokenizer to construct data matricies that look like the ones pre-supplied by keras.
Step5: To reconstruct the text, which will have any words not in our vocabulary removed, we can use this function
Step6: Notice that much of the original context is gone given our aggressive filtering, but the main tone (in this case, at least) remains. We would probably want to filter out a few things, like numbers, if we were being more careful.
Step7: I think it is incredibly important to make sure the shape of the weights and bias make sense to you. If they do, you probably understand a large part of what is going on.
Step8: Fitting the model works exactly the same as with CNNs or dense neural networks.
Step9: III. LSTM
Step10: The weights in the LSTM layer are quite a bit more complex, with four triples of W, U, and b. All four triples have the same shapes, however.
Step11: We'll train the model the same as with the SimpleRNN, but the computational time will be significantly higher. The algorithm needs to backpropagate the complex mechanism inside of the LSTM unit through the entire time series, so this does not seem too surprising.
Step12: IV. GRU
Step13: GRU's have one fewer sets of weights (W,U,b).
Step14: V. Evaluating a sequence of inputs
Step15: Now that we've trained on the final output, we want to take the same weights as before, but to make SimpleRNN return the entire sequence. The output layer will then return a result of size 100, rather than size 1; this is the result of the algorithm after seeing just the first k terms. The last value will be the same as using model.
Step16: Notice that the dimensions of the weights are exactly the same; the input sizes are larger, but with weight sharing we can use the same weight matricies. This is akin to the OverFeat paper for CNNs where a convolution is applied to a larger image; the output's dimensions just increase.
Step17: VI. Visualize the output
|
1,334
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'):
    """
    Split a string into a list of words, removing punctuation and stop words.
    """
# the following functions were influenced by Dr. Granger's advice in office hours
# removes punctuation and stop words in the lists
def remove(c):
if c in punctuation:
return False
else:
return True
def remove_stop(f):
if stop_words==None:
pass
elif f in stop_words:
return False
else:
return True
a1=s.replace('--',' ').replace('-',' ') #got word count problems with single and double dashes, so this replaces them with spaces
new_list=a1.splitlines()
d=[w for m in new_list for w in m.split()] #referenced http://goo.gl/YWCeAS
r=[]
for x in d: # got help from James A.
werd=[]
for char in x:
werd=werd+list(filter(remove,char))
r.append(''.join(werd))
low=[g.lower() for g in r]
k=[]
if type(stop_words)==list:
k=list(filter(remove_stop,low))
elif type(stop_words)==str:
stop_words=stop_words.split()
k=list(filter(remove_stop,low))
else:
k=low
return k
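# Aside, purely my illustration rather than the graded solution: the same
# tokenization can be sketched far more compactly with a regular
# expression, though its edge-case handling around punctuation differs
# slightly from the version above.
import re
def tokenize_re(s, stop_words=None):
    words = [w.lower() for w in re.findall(r"[A-Za-z0-9']+", s)]
    if isinstance(stop_words, str):
        stop_words = stop_words.split()
    if stop_words:
        words = [w for w in words if w not in stop_words]
    return words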
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
['this', 'way', 'that', 'things', 'will', 'end']
wasteland = """
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
"""
assert tokenize(wasteland, stop_words='is the of and') == \
['april','cruellest','month','breeding','lilacs','out','dead','land',
'mixing','memory','desire','stirring','dull','roots','with','spring',
'rain']
def count_words(data):
    """
    Return a word count dictionary from the list of words in data.
    """
d={}
for i in range(len(data)):
d[data[i]]=data.count(data[i])
# this creates a dictionary similar to the method used in codecademy
return d
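# Aside, my addition rather than the graded solution: the loop above calls
# list.count() once per element, which is O(n^2). The standard library's
# collections.Counter builds the same dictionary in a single O(n) pass:
from collections import Counter
def count_words_fast(data):
    return dict(Counter(data))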
assert count_words(tokenize('this and the this from and a a a')) == \
{'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
def sort_word_counts(wc):
    """
    Return a list of 2-tuples of (word, count), sorted by count descending.
    """
def key_thing(m):
return m[1] # to make sorted work, I needed to create a function that would return the value of the key
tuple_list=list(wc.items())
return sorted(tuple_list, key=key_thing, reverse=True)
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
[('a', 4), ('this', 3), ('and', 2), ('the', 1)]
f=open('mobydick_chapter1.txt','r')
moby_file=f.read()
swc=sort_word_counts(count_words(tokenize(moby_file, stop_words=['the','of','and','a','to','in','is','it','that','as'])))
len(swc)
# In this cell, I am only able to find 844 unique words in the chapter.
# However, I do know the answer is 848. I have searched through the entire text
# file and my output to determine what words are missing or formatted
# incorrectly, but couldn't find any discrepancies. I know I am not the only
# with this problem, as indicated by the gitter chat.
assert swc[0]==('i',43)
assert len(swc)==848
plt.figure(figsize=(6,9))
q=np.empty(51)
e=[]
for i in range(51):
q[i]=swc[i][1]
q2=sorted(q,reverse=False)
for i in range(51):
e.append(swc[i][0])
plt.scatter(q2, range(51), marker='o')
plt.ylim(top=51, bottom=-1)
plt.xlabel('Occurrences of Word')
plt.xlim(left=5)
plt.ylabel('Words')
plt.yticks(range(50,-1,-1),e)
plt.title('Words of Moby Dick Chapter 1')
plt.tick_params(axis='x',top='off')
plt.tick_params(axis='y',right='off')
assert True # use this for grading the dotplot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Word counting
Step5: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
Step7: Write a function sort_word_counts that return a list of sorted word counts
Step8: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt
Step9: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
|
1,335
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
1,336
|
<ASSISTANT_TASK:>
Python Code:
# Import the necessary packages
%matplotlib inline
from SimPEG.utils import download
from geoscilabs.seismic.syntheticSeismogram import InteractLogs, InteractDtoT, InteractWconvR, InteractSeismogram
from geoscilabs.seismic.NMOwidget import ViewWiggle, InteractClean, InteractNosiy, NMOstackthree
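# (note: 'InteractNosiy' is the spelling used by the geoscilabs package itself)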
# from geoscilabs.seismic.drawGaussianSeismogram import *
LogVal = InteractLogs(v1=1500,v2=1500,v3=1500)
LogVal
InteractDtoT(LogVal)
InteractWconvR()
InteractSeismogram()
# Define path to required data files
synDataFilePath = 'https://github.com/geoscixyz/geosci-labs/raw/main/assets/seismic/syndata1.npy'
obsDataFilePath = 'https://github.com/geoscixyz/geosci-labs/raw/main/assets/seismic/obsdata1.npy'
timeFilePath= 'https://github.com/geoscixyz/geosci-labs/raw/main/assets/seismic/time1.npy'
# Download the data
synData = download(synDataFilePath,overwrite=True,verbose=False)
obsData = download(obsDataFilePath,overwrite=True,verbose=False)
timeData = download(timeFilePath,overwrite=True,verbose=False)
# Plot the data
ViewWiggle(synData, obsData)
# Fit hyperbola to clean data
clean = InteractClean(synData,timeData)
clean
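# For reference, the travel-time curve being fit here is the NMO hyperbola
#   t(x) = sqrt(t0**2 + x**2 / v**2)
# where t0 is the zero-offset intercept time and v the stacking velocity.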
noisy = InteractNosiy(obsData, timeData)
noisy
NMOstackthree(obsData, noisy.kwargs["t0"], noisy.kwargs["v"]-200., noisy.kwargs["v"], noisy.kwargs["v"]+200., timeData)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 Normal Incidence Seismogram
Step2: 1.2 Depth to Time conversion
Step3: 1.3 Constructing the ideal seismogram
Step4: 1.4 Vertical Resolution
Step5: Constructing a Normal Incidence Seismic Trace from CMP gather data
Step6: Step 2
Step7: Step 3
Step8: In the previous step, you chose an intercept time (t0) and a stacking velocity (v). Running the cell below will generate three stacked traces
|
1,337
|
<ASSISTANT_TASK:>
Python Code:
# Installing Theano
# pip install --upgrade --no-deps git+https://github.com/Theano/Theano.git
# Installing Tensorflow
# pip install tensorflow
# Installing Keras
# pip install --upgrade keras
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('./Artificial_Neural_Networks/Churn_Modelling.csv')
X = dataset.iloc[:, 3:13].values
y = dataset.iloc[:, 13].values
print (X.shape)
X
print (y.shape)
y
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
onehotencoder = OneHotEncoder(categorical_features = [1])
X = onehotencoder.fit_transform(X).toarray()
X = X[:, 1:]
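# NOTE (editorial): the `categorical_features` argument used above was removed
# from scikit-learn in version 0.22, so this cell requires an older release.
# A hedged sketch of the modern equivalent (illustrative, untested here):
#   from sklearn.compose import ColumnTransformer
#   ct = ColumnTransformer([('geo', OneHotEncoder(drop='first'), [1])],
#                          remainder='passthrough')
#   X = ct.fit_transform(X)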
print (X.shape)
X
print (y.shape)
y
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
# Adding the second hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
cm
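# Accuracy can also be read straight off the confusion matrix: (TN + TP) / total
print((cm[0, 0] + cm[1, 1]) / cm.sum())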
# Evaluating the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense
def build_classifier():
classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
return classifier
classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, epochs = 100)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
mean = accuracies.mean()
variance = accuracies.std()
# Improving the ANN
# Dropout Regularization to reduce overfitting if needed
# Tuning the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
def build_classifier(optimizer):
classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
# classifier.add(Dropout(p = 0.1))
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
# classifier.add(Dropout(p = 0.1))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = optimizer, loss = 'binary_crossentropy', metrics = ['accuracy'])
return classifier
classifier = KerasClassifier(build_fn = build_classifier)
parameters = {'batch_size': [25, 32],
'epochs': [100, 500],
'optimizer': ['adam', 'rmsprop']}
grid_search = GridSearchCV(estimator = classifier,
param_grid = parameters,
scoring = 'accuracy',
cv = 10)
grid_search = grid_search.fit(X_train, y_train)
best_parameters = grid_search.best_params_
best_accuracy = grid_search.best_score_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Preprocessing
Step2: y (actual value)
Step3: Building an ANN
Step4: Making predictions and evaluating the model
Step5: Evaluating, Improving and Tuning the ANN
|
1,338
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from collections import Counter, defaultdict
import math
import nltk
from nltk.tree import Tree
class Symbol:
"""A symbol in a grammar.
This class will be used as parent class for Terminal, Nonterminal.
This way both will be a type of Symbol.
"""
def __init__(self):
pass
class Terminal(Symbol):
"""Terminal symbols are words in a vocabulary
E.g. 'I', 'ate', 'salad', 'the'
"""
def __init__(self, symbol: str):
assert type(symbol) is str, 'A Terminal takes a python string, got %s' % type(symbol)
self._symbol = symbol
def is_terminal(self):
return True
def is_nonterminal(self):
return False
def __str__(self):
return "'%s'" % self._symbol
def __repr__(self):
return 'Terminal(%r)' % self._symbol
def __hash__(self):
return hash(self._symbol)
def __len__(self):
"""The length of the underlying python string"""
return len(self._symbol)
def __eq__(self, other):
return type(self) == type(other) and self._symbol == other._symbol
def __ne__(self, other):
return not (self == other)
@property
def obj(self):
"""Returns the underlying python string"""
return self._symbol
class Nonterminal(Symbol):
"""Nonterminal symbols are the grammatical classes in a grammar.
E.g. S, NP, VP, N, Det, etc.
"""
def __init__(self, symbol: str):
assert type(symbol) is str, 'A Nonterminal takes a python string, got %s' % type(symbol)
self._symbol = symbol
def is_terminal(self):
return False
def is_nonterminal(self):
return True
def __str__(self):
return "[%s]" % self._symbol
def __repr__(self):
return 'Nonterminal(%r)' % self._symbol
def __hash__(self):
return hash(self._symbol)
def __len__(self):
"""The length of the underlying python string"""
return len(self._symbol)
def __eq__(self, other):
return type(self) == type(other) and self._symbol == other._symbol
def __ne__(self, other):
return not (self == other)
@property
def obj(self):
"""Returns the underlying python string"""
return self._symbol
dog = Terminal('dog')
the = Terminal('the')
walks = Terminal('walks')
S = Nonterminal('S')
NP = Nonterminal('NP')
NP_prime = Nonterminal('NP')
VP = Nonterminal('VP')
V = Nonterminal('V')
N = Nonterminal('N')
Det = Nonterminal('Det')
print(dog)
print(NP)
print()
print(NP==Det)
print(NP!=Det)
print(NP==NP)
print(NP==NP_prime)
dog
dog.is_terminal()
NP.is_terminal()
d = {NP: 1, S: 2}
d
class Rule:
def __init__(self, lhs, rhs, prob):
"""Constructs a Rule.
A Rule takes a LHS symbol and a list/tuple of RHS symbols.
:param lhs: the LHS nonterminal
:param rhs: a sequence of RHS symbols (terminal or nonterminal)
:param prob: probability of the rule
"""
assert isinstance(lhs, Symbol), 'LHS must be an instance of Symbol'
assert len(rhs) > 0, 'If you want an empty RHS, use an epsilon Terminal EPS'
assert all(isinstance(s, Symbol) for s in rhs), 'RHS must be a sequence of Symbol objects'
self._lhs = lhs
self._rhs = tuple(rhs)
self._prob = prob
def __eq__(self, other):
return self._lhs == other._lhs and self._rhs == other._rhs and self._prob == other._prob
def __ne__(self, other):
return not (self == other)
def __hash__(self):
return hash((self._lhs, self._rhs, self._prob))
def __repr__(self):
return '%s -> %s (%s)' % (self._lhs,
' '.join(str(sym) for sym in self._rhs),
self.prob)
def is_binary(self):
"""True if Rule is binary: A -> B C"""
return len(self._rhs) == 2
def is_unary(self):
"""True if Rule is unary: A -> w"""
return len(self._rhs) == 1
@property
def lhs(self):
"""Returns the lhs of the rule"""
return self._lhs
@property
def rhs(self):
"""Returns the rhs of the rule"""
return self._rhs
@property
def prob(self):
"""Returns the probability of the rule"""
return self._prob
r1 = Rule(S, [NP, VP], 1.0)
r2 = Rule(NP, [Det, N], 1.0)
r3 = Rule(N, [dog], 1.0)
r4 = Rule(Det, [the], 1.0)
r5 = Rule(VP, [walks], 1.0)
print(r1)
print(r2)
print(r3)
print(r4)
print(r1.prob)
r1 in set([r1])
d = {r1: 1, r2: 2}
d
class PCFG(object):
"""Constructs a PCFG.
A PCFG stores a list of rules that can be accessed in various ways.
:param rules: an optional list of rules to initialize the grammar with
"""
def __init__(self, rules=[]):
self._rules = []
self._rules_by_lhs = defaultdict(list)
self._terminals = set()
self._nonterminals = set()
for rule in rules:
self.add(rule)
def add(self, rule):
"""Adds a rule to the grammar"""
if not rule in self._rules:
self._rules.append(rule)
self._rules_by_lhs[rule.lhs].append(rule)
self._nonterminals.add(rule.lhs)
for s in rule.rhs:
if s.is_terminal():
self._terminals.add(s)
else:
self._nonterminals.add(s)
def update(self, rules):
"""Add a list of rules to the grammar"""
for rule in rules:
self.add(rule)
@property
def nonterminals(self):
"""The list of nonterminal symbols in the grammar"""
return self._nonterminals
@property
def terminals(self):
"""The list of terminal symbols in the grammar"""
return self._terminals
@property
def rules(self):
"""The list of rules in the grammar"""
return self._rules
@property
def binary_rules(self):
"""The list of binary rules in the grammar"""
return [rule for rule in self._rules if rule.is_binary()]
@property
def unary_rules(self):
"""The list of unary rules in the grammar"""
return [rule for rule in self._rules if rule.is_unary()]
def __len__(self):
return len(self._rules)
def __getitem__(self, lhs):
return self._rules_by_lhs.get(lhs, frozenset())
def get(self, lhs, default=frozenset()):
"""The list of rules whose LHS is the given symbol lhs"""
return self._rules_by_lhs.get(lhs, frozenset())
def __iter__(self):
"""Iterator over rules (in arbitrary order)"""
return iter(self._rules)
def iteritems(self):
"""Iterator over pairs of the kind (LHS, rules rewriting LHS)"""
return self._rules_by_lhs.items()
def __str__(self):
"""Prints the grammar line by line"""
lines = []
for lhs, rules in self.iteritems():
for rule in rules:
lines.append(str(rule))
return '\n'.join(lines)
G = PCFG()
G.add(r1)
G.update([r2,r3,r4,r5])
print(G)
G.get(S)
G.get(NP)
sum(1 for r in G)
len(G)
print(G.nonterminals)
print(G.terminals)
S in G.nonterminals
dog in G.terminals
G.unary_rules
G.binary_rules
parse1 = "(S (NP I) (VP (VP (V shot) (NP (Det an) (N elephant))) (PP (P in) (NP (Det my) (N pajamas)))))"
parse2 = "(S (NP I) (VP (V shot) (NP (Det an) (NP (N elephant) (PP (P in) (NP (Det my) (N pajamas)))))))"
pajamas1 = Tree.fromstring(parse1)
pajamas2 = Tree.fromstring(parse2)
pajamas1.pretty_print()
pajamas2.pretty_print()
# Turn the sentence into a list
sentence = "I shot an elephant in my pajamas".split()
# The length of the sentence
num_words = len(sentence)
def read_grammar_rules(istream):
"""Reads grammar rules formatted as 'LHS ||| RHS ||| PROB'."""
for line in istream:
line = line.strip()
if not line:
continue
fields = line.split('|||')
if len(fields) != 3:
raise ValueError('I expected 3 fields: %s' % fields)
lhs = fields[0].strip()
if lhs[0] == '[':
lhs = Nonterminal(lhs[1:-1])
else:
lhs = Terminal(lhs)
rhs = fields[1].strip().split()
new_rhs = []
for r in rhs:
if r[0] == '[':
r = Nonterminal(r[1:-1])
else:
r = Terminal(r)
new_rhs.append(r)
prob = float(fields[2].strip())
yield Rule(lhs, new_rhs, prob)
# Read in the grammar
istream = open('groucho-grammar-1.txt')
grammar = PCFG(read_grammar_rules(istream))
print("The grammar:\n", grammar, "\n")
num_nonterminals = len(grammar.nonterminals)
# Make a nonterminal2index and a index2nonterminal dictionary
n2i = defaultdict(lambda: len(n2i))
i2n = dict()
for A in grammar.nonterminals:
i2n[n2i[A]] = A
# Stop defaultdict behavior of n2i
n2i = dict(n2i)
n2i
# A numpy array of zeros
score = np.zeros((num_nonterminals,
num_words + 1,
num_words + 1))
# A numpy array that can store arbitrary data (we set dtype to object)
back = np.zeros((num_nonterminals,
num_words + 1,
num_words + 1), dtype=object)
# Illustration of the backpointer array
back[n2i[S]][0][-1] = (2,NP,VP)
back
def cky(sentence, grammar, n2i):
"""The CKY algorithm.
Follow the pseudocode from the slides (or J&M).
:param sentence: a list of words
:param grammar: an instance of the class PCFG
:param n2i: a dictionary mapping from Nonterminals to indices
:return score: the filled in scores chart
:return back: the filled in backpointers chart
"""
num_words = len(sentence)
num_nonterminals = len(grammar.nonterminals)
# A numpy array to store the scores of intermediate parses
score = np.zeros((num_nonterminals,
num_words + 1,
num_words + 1))
# A numpy array to store the backpointers
back = np.zeros((num_nonterminals,
num_words + 1,
num_words + 1), dtype=object)
# YOUR CODE HERE
raise NotImplementedError
return score, back
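# Illustrative sketch only (editorial addition, not the exercise solution):
# one possible chart fill for a CNF grammar whose unary rules are lexical
# (A -> word) and whose binary rules rewrite to two Nonterminals, matching
# the PCFG class above. `cky_sketch` is a hypothetical name.
def cky_sketch(sentence, grammar, n2i):
    num_words = len(sentence)
    num_nonterminals = len(grammar.nonterminals)
    score = np.zeros((num_nonterminals, num_words + 1, num_words + 1))
    back = np.zeros((num_nonterminals, num_words + 1, num_words + 1), dtype=object)
    # Lexical (unary) rules fill the width-1 cells (i, i+1)
    for i, word in enumerate(sentence):
        for rule in grammar.unary_rules:
            A = n2i[rule.lhs]
            if rule.rhs[0] == Terminal(word) and rule.prob > score[A][i][i + 1]:
                score[A][i][i + 1] = rule.prob
                back[A][i][i + 1] = rule.rhs[0]
    # Binary rules combine adjacent spans, shortest spans first
    for span in range(2, num_words + 1):
        for begin in range(num_words - span + 1):
            end = begin + span
            for split in range(begin + 1, end):
                for rule in grammar.binary_rules:
                    B, C = rule.rhs
                    A = n2i[rule.lhs]
                    p = rule.prob * score[n2i[B]][begin][split] * score[n2i[C]][split][end]
                    if p > score[A][begin][end]:
                        score[A][begin][end] = p
                        back[A][begin][end] = (split, B, C)
    return score, back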
# Run CKY
score, back = cky(sentence, grammar, n2i)
### Don't change the code in this cell. ###
S = Nonterminal('S')
print('The whole slice for nonterminal S:')
print(score[n2i[S]], "\n")
print('The score in cell (S, 0, num_words), which is the probability of the best parse:')
print(score[n2i[S]][0][num_words], "\n")
print('The backpointer in cell (S, 0, num_words):')
print(back[n2i[S]][0][num_words], "\n")
class Span(Symbol):
"""A Span indicates that symbol was recognized between begin and end.
Example:
Span(Terminal('the'), 0, 1)
This means: we found 'the' in the sentence between 0 and 1
Span(Nonterminal('NP'), 4, 8) represents NP:4-8
This means: we found an NP that covers the part of the sentence between 4 and 8
Thus, Span holds a Terminal or a Nonterminal and wraps it between two integers.
This makes it possible to distinguish between two instances of the same rule in the derivation.
Example:
We can find that the rule NP -> Det N is used twice in the parse derivation, but that in the first
case it spans "an elephant" and in the second case it spans "my pajamas". We want to distinguish these.
So: "an elephant" is covered by [NP]:2-4 -> [Det]:2-3 [N]:3-4
"my pajamas" is covered by [NP]:5-7 -> [Det]:5-6 [N]:6-7
Internally, we represent spans with tuples of the kind (symbol, start, end).
"""
def __init__(self, symbol, start, end):
assert isinstance(symbol, Symbol), 'A span takes an instance of Symbol, got %s' % type(symbol)
self._symbol = symbol
self._start = start
self._end = end
def is_terminal(self):
# a span delegates this to an underlying symbol
return self._symbol.is_terminal()
def root(self):
# Spans are hierarchical symbols, thus we delegate
return self._symbol.root()
def obj(self):
"""The underlying python tuple (Symbol, start, end)"""
return (self._symbol, self._start, self._end)
def translate(self, target):
return Span(self._symbol.translate(target), self._start, self._end)
def __str__(self):
"""Prints Symbol with span if Symbol is Nonterminal else without (purely aesthetic distinction)"""
if self.is_terminal():
return "%s" % (self._symbol)
else:
return "%s:%s-%s" % (self._symbol, self._start, self._end)
def __repr__(self):
return 'Span(%r, %r, %r)' % (self._symbol, self._start, self._end)
def __hash__(self):
return hash((self._symbol, self._start, self._end))
def __eq__(self, other):
return type(self) == type(other) and self._symbol == other._symbol and self._start == other._start and self._end == other._end
def __ne__(self, other):
return not (self == other)
span_S = Span(S, 0, 10)
print(span_S)
span_S = Span(dog, 4, 5)
print(span_S)
spanned_rule = Rule(Span(NP, 2, 4), [Span(Det, 2, 3), Span(NP, 3, 4)], prob=None)
print(spanned_rule)
def build_tree(back, sentence, root, n2i):
"""Reconstruct the Viterbi parse from a filled-in backpointer chart.
It returns a list called derivation which holds the rules of the parse. If you
want to use the function make_nltk_tree you must make sure that the
derivation is ordered depth first.
:param back: a backpointer chart of shape [num_nonterminals, num_words+1, num_words+1]
:param sentence: a list of words
:param root: the root symbol of the tree: Nonterminal('S')
:param n2i: the dictionary mapping from Nonterminals to indices
:return derivation: a list of Rules with Span symbols that generate the Viterbi tree.
If you want to draw them with the function that we provide, then this list
should be ordered depth first!
"""
derivation = []
num_words = len(sentence)
# YOUR CODE HERE
raise NotImplementedError
return derivation
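# Illustrative sketch only (editorial addition, not the exercise solution):
# a depth-first walk of the backpointers, assuming lexical cells store the
# Terminal itself as in cky_sketch above. `build_tree_sketch` is hypothetical.
def build_tree_sketch(back, sentence, root, n2i):
    def recurse(symbol, begin, end):
        bp = back[n2i[symbol]][begin][end]
        if isinstance(bp, Terminal):  # lexical rule A -> word
            return [Rule(Span(symbol, begin, end), [Span(bp, begin, end)], prob=None)]
        split, B, C = bp
        rule = Rule(Span(symbol, begin, end),
                    [Span(B, begin, split), Span(C, split, end)], prob=None)
        # parent rule first, then left subtree, then right subtree (depth first)
        return [rule] + recurse(B, begin, split) + recurse(C, split, end)
    return recurse(root, 0, len(sentence))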
derivation = build_tree(back, sentence, S, n2i)
derivation
def make_nltk_tree(derivation):
"""Return a NLTK Tree object based on the derivation
(list or tuple of Rules)
"""
d = defaultdict(None, ((r.lhs, r.rhs) for r in derivation))
def make_tree(lhs):
return Tree(str(lhs), (str(child) if child not in d else make_tree(child) for child in d[lhs]))
return make_tree(derivation[0].lhs)
tree = make_nltk_tree(derivation)
tree.pretty_print()
# YOUR CODE HERE
# YOUR CODE HERE
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step8: 1. PCFG
Step9: Let's try out the classes by initializing some terminal and nonterminal symbols
Step10: The methods __eq__ and __ne__ make it possible to compare our objects using standard Python syntax. But more importantly
Step11: Note the difference between calling print(NP) and simply calling NP. The first is taken care of by the method __str__ and the second by the method __repr__.
Step12: We can also easily check if our symbol is a terminal or not
Step13: Finally the method __hash__ makes our object hashable, and hence usable in a data structure like a dictionary.
Step20: Rules
Step21: Just as with Terminal and Nonterminal you can print an instance of Rule, you can access its attributes, and you can use rules in hash-based containers such as dict and set.
Step34: Grammar
Step35: Initialize a grammar
Step36: We can add rules individually with add, or as a list with update
Step37: We can print the grammar
Step38: We can get the set of rewrite rules for a certain LHS symbol.
Step39: We can also iterate through rules in the grammar.
Step40: which can also be done in a more efficient way
Step41: We can access the set of terminals and nonterminals of the grammar
Step42: Finally we can easily access all the binary rules and all the unary rules in the grammar
Step43: For the following sections you will need to have the Natural Language Toolkit (NLTK) installed. We will use a feature of the NLTK toolkit that lets you draw constituency parses. Details for download can be found here
Step44: We can then pretty-print these trees
Step45: Parsing with CKY
Step47: A PCFG for this sentence can be found in the file groucho-grammar.txt. We read this in with the function read_grammar_rules.
Step48: We will also need the following two dictionaries
Step49: The charts
Step50: The following illustrates the way you will use the back chart. In this example, your parser recognized that the words between 0 and 2 form an NP and the words between 2 and the end of the sentence form a VP (and nothing else yet)
Step52: Exercise 1. (80 points)
Step53: Check your CKY
Step57: Exercise 2. (20 points)
Step58: Example usage of Span
Step60: Your final derivation should look like this
Step61: Get your derivation
Step63: Turn the derivation into an NLTK tree
Step64: That's it!
Step65: The man with the telescope
|
1,339
|
<ASSISTANT_TASK:>
Python Code:
import geopandas as gpd
import ipywidgets as widgets
import numpy as np
import pandas as pd
import pyproj
from cartoframes.auth import set_default_credentials
from cartoframes.data.observatory import *
from cartoframes.data.services import Geocoding, Isolines
from cartoframes.viz import *
from IPython.display import clear_output, display
from scipy.spatial.distance import cdist
from shapely import wkt
from shapely.geometry import Point
pd.set_option('display.max_columns', 100)
set_default_credentials('creds.json')
stores = pd.read_csv('https://docs.google.com/spreadsheets/d/1RlOsWN3OBTS0Zhq2lbYvBrwXxSJjpCdWrOWHSqg2JVE/export?gid=0&format=csv')
stores.head()
stores.shape
stores['complete_address'] = stores.apply(lambda row : f"{row['mainaddressline']}, {row['postcode']}",axis=1)
gc = Geocoding()
gdf, metadata = gc.geocode(stores, street='complete_address', city='areaname2',
state='areaname1', country={'value': 'Spain'}, )
gdf.head(2)
stores = gdf[stores.columns.tolist() + ['the_geom']].rename(columns={'the_geom':'geometry'})
stores.head(2)
Map(Layer(stores,
popup_hover=popup_element('name'),
geom_col='geometry'))
stores['name'] = stores['name'].str[18:]
Map(Layer(stores,
style=size_continuous_style('annual_sales'),
popup_hover=[popup_element('name'), popup_element('annual_sales')],
legends=size_continuous_legend('Annual Sales', 'Annual sales in million euros'),
geom_col='geometry'))
iso_service = Isolines()
isochrones_gdf, _ = iso_service.isochrones(stores, [300, 600], mode='walk', exclusive=False)
isochrones_gdf.head()
Map(Layer(isochrones_gdf, geom_col='the_geom', style=basic_style(opacity=0.3)))
stores['iso_5walk'] = isochrones_gdf.loc[isochrones_gdf['data_range'] == 300, 'the_geom'].values
stores['iso_10walk'] = isochrones_gdf.loc[isochrones_gdf['data_range'] == 600, 'the_geom'].values
enrichment = Enrichment()
dataset = Dataset.get('pb_points_of_i_94bda91b')
dataset.variables.to_dataframe().head()
enriched_dataset_gdf = enrichment.enrich_polygons(
stores,
variables=['CLASS_517d6003'],
aggregation='COUNT',
geom_col='iso_5walk'
)
enriched_dataset_gdf.head()
stores['n_pois'] = enriched_dataset_gdf['CLASS'].values
Map(Layer(stores, geom_col='iso_5walk', style=color_bins_style('n_pois')))
enriched_dataset_gdf = enrichment.enrich_polygons(
stores,
variables=['CLASS_517d6003'],
aggregation='COUNT',
geom_col='iso_10walk',
filters={'carto-do.pitney_bowes.pointsofinterest_pointsofinterest_esp_latlon_v1_monthly_v1.CLASS':
"= 'GROCERY STORES'"}
)
stores['n_competitors'] = enriched_dataset_gdf['CLASS'].values
stores.head(3)
dataset = Dataset.get('u360_sociodemogr_28e93b81')
dataset.head()
Variable.get('C02_01_GASTO_M__7ad08d93').to_dict()
vars_enrichment = ['P_T_9be2c6a7',
'P_ED_00_14_M_b66ee9e9', 'P_ED_00_14_H_c6041d66', 'P_ED_15_24_M_5261dc00', 'P_ED_15_24_H_220b288f',
'P_ED_25_44_M_46e29941', 'P_ED_25_44_H_36886dce', 'P_ED_45_64_M_8f3b64f0', 'P_ED_45_64_H_ff51907f',
'P_ED_65_79_M_a8c081ef', 'P_ED_65_79_H_d8aa7560', 'P_ED_80_MAS_M_c1c729f7', 'P_ED_80_MAS_H_b1addd78',
'renta_hab_disp_e4a8896c', 'C02_01_GASTO_M__7ad08d93']
enriched_dataset_gdf = enrichment.enrich_polygons(
stores,
variables=vars_enrichment,
geom_col='iso_5walk'
)
stores = enriched_dataset_gdf
stores.crs = 'epsg:4326'
stores.columns = map(str.lower, stores.columns)
stores.head()
madrid_city_center = Point(-3.703367, 40.416892)
proj_in = pyproj.Proj('epsg:4326')
proj_out = pyproj.Proj('epsg:25830')
project = pyproj.Transformer.from_proj(proj_in, proj_out).transform
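# EPSG:25830 is ETRS89 / UTM zone 30N, a metric projection covering Spain,
# so the distances computed below are in metres.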
stores['dist_cc'] = stores.set_geometry('geometry').to_crs('epsg:25830').distance(
Point(project(madrid_city_center.y, madrid_city_center.x))).values
stores.head(2)
dist_array = cdist(stores.set_geometry('geometry').to_crs('epsg:25830').geometry.apply(lambda point:[point.x, point.y]).tolist(),
stores.set_geometry('geometry').to_crs('epsg:25830').geometry.apply(lambda point:[point.x, point.y]).tolist())
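# np.partition(dist_a, 2)[:2] yields each row's two smallest distances; the larger
# of the two is the nearest *other* store, since the smallest is the store's own
# zero self-distance.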
stores['distance_closest_ce'] = list(map(lambda dist_a:np.max(np.partition(dist_a, 2)[:2]), dist_array))
stores.head()
Map(Layer(stores,
geom_col='iso_5walk',
style=color_bins_style('n_competitors'),
legends=color_bins_legend('# Competitors', 'competitors within 10-minute walking isochrone'),
popup_hover=[popup_element('name', 'Name'),
popup_element('n_pois', 'Number of POIs'),
popup_element('n_competitors', 'Number of competitors'),
popup_element('p_t', 'Population coverage'),
popup_element('c02_01_gasto_m_alimentacion_m', 'Groceries spending'),
popup_element('renta_hab_disp', 'income'),
popup_element('distance_closest_ce', 'Distance to closest CE')],
widgets=[histogram_widget('n_pois', 'Number of POIs', description='Select a range of values to filter', buckets=10),
histogram_widget('n_competitors', 'Number of competitors', description='Select a range of values to filter', buckets=10),
histogram_widget('dist_cc', 'Distance to city center', description='Select a range of values to filter', buckets=10),
histogram_widget('distance_closest_ce', 'Distance to closest CE store', description='Select a range of values to filter', buckets=10)]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Step2: 1. Load data
Step3: 2. Processing data. Geocoding
Step4: 3. Spatial Data Analysis
Step5: 3. Enrichment
Step6: 3.2 Enrichment
Step7: 3.2.1 POIs
Step8: Number of POIs
Step9: Number of competitors
Step10: 3.2.2 Sociodemographic and socioeconomic data
Step11: Explore and identify the variables of interest
Step12: We decide to enrich our dataframe with the following variables.
Step13: 3.2.3 Spatial lag variables
Step14: 3.2.3.2 Distance to closest Carrefour Express
Step15: 3.3 Visualize enrichment
|
1,340
|
<ASSISTANT_TASK:>
Python Code:
list1 = [10, 12, 14, 16, 18]
print(list1[0]) # Index starts at 0
print(list1[-1]) # Last index at -1
print(list1[0:3]) # Slicing: exclusive of end value
# i.e. get i=(0, 1, .. n-1)
print(list1[3:]) # "slice from i=3 to end"
list1.append(20)
print(list1)
list1.extend([22,24,26])
print(list1)
list1[3]='squirrel'
print(list1)
list1.remove('squirrel')
print(list1)
list1.insert(3,16)
print(list1)
tuple1 = (10, 12, 14, 16, 18)
print(tuple1)
print(tuple1[0])
print(tuple1[1:3])
print(tuple1[3:])
tuple1.append(20)
print(tuple1)
%timeit tuple1=(10,12,14,16,18)
%timeit list1=[10,12,14,16,18]
#%timeit tuple1[3:]
#%timeit list1[3:]
tuple2 = 'Lucy','Ryan'
a, b = tuple2
print('{} is OK, {} is amazing!'.format(a, b))
b, a = a, b
print('{} is OK, {} is amazing!'.format(a, b))
ages = [('Lucy', 25), ('Ryan', 24)]
for name, age in ages:
print('{} is {}.'.format(name, age))
list2 = [list1, tuple1]
list1 = [1, 2 , 3]
print(list2)
# To use numpy, we first have to import the package
import numpy as np
# Can convert a list to an array:
array1=np.array(list1)
print(array1)
# Can make an evenly spaced array between 2 values using linspace or arange.
# linspace takes the number of points to use as an argument and returns floats by default
print(np.linspace(0, 10, 11))
# arange takes the spacing as an argument and returns the type given as the spacing, e.g.
print(np.arange(0, 11, 1.))
print(np.arange(0, 11, 1))
print('The average of array1 is', np.average(array1))
print('The sum of array1 is', np.sum(array1))
# Apply functions
print(np.exp(array1))
print(np.reciprocal(array1))
array2=np.array([float(array1[i]) for i in range(len(array1))])
a=np.reciprocal(array2)
print(np.reciprocal([float(array1[i]) for i in range(len(array1))]))
angles=np.array([0, np.pi/2., np.pi, 3*np.pi/4.])
np.sin(angles)
M1 = np.array([[2,3],[6,3]])
M2 = np.array([[5,6],[2,9]])
print('M1:')
print(M1)
print('M2:')
print(M2)
M3 = M1 * M2 # Element-wise multiplication
print(M3, '\n')
M4 = np.dot(M1, M2) # Matrix multiplication
print(M4)
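# Since Python 3.5 (and NumPy >= 1.10) the @ operator is shorthand for np.dot on 2-D arrays:
M5 = M1 @ M2
print(M5)  # same result as M4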
premier_league_data = np.loadtxt('example.csv')
print(premier_league_data)
print(type(premier_league_data[0][0]))
price_table = {'apples': 50, 'pears': 60, 'bananas': 20}
print(price_table)
fruit = [('apples', 50), ('bananas', 20), ('pears', 60)]
price_table1 = dict(fruit)
print(price_table==price_table1)
# NOTE: the order in which you define a dictionary doesn't matter: it is stored in a
# hash table rather than indexed by position like lists and tuples
# To get a value out, you use square brackets but instead of an index, you use the key:
akey = 'apples'
print("The price of {} is {}p.".format(akey, price_table[akey]))
# Trying to use an index wouldn't work:
print(price_table[0])
price_table.keys()
# Example usage:
shopping_list = [('apples', 50), ('bananas', 20)]
total = 0
for item, quantity in shopping_list:
price = price_table[item]
print('Adding {} {} at {}p each.'.format(quantity, item, price))
total += price * quantity
print('Total shopping cost is £%.2f.' %(total/100.))
price_table['kiwis']=30
print(price_table)
del price_table['bananas']
print(price_table)
price_table['apples']=25
print(price_table)
# Iterating over the dictionary will iterate over its keys
for key in price_table:
print("{} cost {}p".format(key, price_table[key]))
# Or use the items method:
for key, val in price_table.items():
print("{} cost {}p".format(key, val))
def square_root(x):
"""Useful docstring: Calculates and returns square root of x"""
i = x ** 0.5
return i
x = 10
y = square_root(x)
print('The square root of {} is {}'.format(x, y))
# We can set a default value to the function
def square_root(x=20):
i = x ** 0.5
return i
print(square_root())
# Loops, functions and appending
mylist = []
for i in range(1,5):
mylist.append(square_root(i))
print(mylist)
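# The same loop written as a list comprehension (equivalent and more idiomatic):
mylist2 = [square_root(i) for i in range(1, 5)]
print(mylist2)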
def update_integer(i):
# attempt to update i; integers are immutable, so the caller's value is unchanged
i += 1
def update_list_end(arglist):
arglist[-1] = 50 # Lists are mutable: updates args directly!
a = 1
update_integer(a)
print(a)
mylist = [0, 1, 2, 3, 4]
update_list_end(mylist)
print(mylist)
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi)
y = np.sin(x)
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.plot(x, y,'o-')
ax.margins(0.1)
ax.set_title('2D plot')
ax.set_xlabel('$x$')
ax.set_ylabel(r'$sin(x)$')
ax.plot()
xtick_values = np.linspace(0, 2*np.pi, 5)
xtick_labels = ['$0$', r'$\frac{\pi}{2}$', r'$\pi$', r'$\frac{3\pi}{2}$',
r'$2\pi$']
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111); ax.plot(x, y,'-o')
ax.set_title('2D plot')
ax.margins(0.1)
ax.set_xlabel('$x$'); ax.set_ylabel(r'$sin(x)$')
ax.set_xticks(xtick_values)
ax.set_xticklabels(xtick_labels, fontsize=25);
f1 = open('textfile.txt', 'r+')
print(f1.read())
f1.close()
with open('textfile.txt', 'r+') as f1:
print(f1.readline())
print(f1.readline())
with open('textfile.txt', 'r+') as f1:
print(f1.readlines())
with open('textfile.txt', 'r+') as f1:
print(list(f1))
with open('textfile.txt', 'r+') as f1:
for line in f1:
print(line)
with open('textfile.txt', 'r+') as f1:
f1.write('Hello')
print(f1.readline())
f1.write('Second Hello')
print(f1.read())
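# Note on 'r+' mode: the file position starts at 0, so write() overwrites bytes at
# the start of the file, and each subsequent read or write continues from the
# current position rather than from the beginning.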
with open('textfile.txt', 'r+') as f1:
print(f1.read())
with open('textfile.txt', 'r+') as f1:
lines = f1.readlines()
del lines[-1]
lines[2] = 'I have changed the third line\n'
with open('textfile.txt', 'w') as f1:
f1.writelines(lines)
# (no need to seek: the file is closed automatically at the end of the with block)
with open('textfile.txt') as f1:
print(f1.read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font color='midnightblue'> Example
Step2: <font color='midnightblue'> Example
Step3: <font color='mediumblue'> Tuples
Step4: <font color='midnightblue'> Example
Step5: <font color='midnightblue'> Example
Step6: <font color='midnightblue'> Example
Step7: <font color='mediumblue'> Numpy Arrays
Step8: <font color='midnightblue'> Example
Step9: #### <font color='midnightblue'> Example
Step10: <font color='midnightblue'> Example
Step11: <font color='mediumblue'> Dictionaries
Step12: <font color='midnightblue'> Example
Step13: <font color='midnightblue'> Example
Step15: <font color='dodgerblue'> Functions
Step16: <font color='midnightblue'> Example
Step17: <font color='dodgerblue'> Plotting
Step18: <font color='midnightblue'> Example
Step19: <font color='dodgerblue'> Reading from and writing to files
|
1,341
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import gc
import numpy as np
import os
import random
from scipy import misc
import string
import time
import sys
import sklearn.metrics as skm
import collections
from sklearn.svm import SVC
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt
from sklearn import metrics
import dwdii_bc_model_helper_old as bc
random.seed(20275)
np.set_printoptions(precision=2)
imagePath = "png"
trainDataPath = "data/ddsm_train.csv"
testDataPath = "data/ddsm_test.csv"
imgResize = (150, 150)
os.listdir('data')
metaData, meta2, mCounts = bc.load_training_metadata(trainDataPath, balanceViaRemoval=True, verbose=True)
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_data, Y_data = bc.load_data(trainDataPath, imagePath,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize)
print X_data.shape
print Y_data.shape
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_test, Y_test = bc.load_data(testDataPath, imagePath,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize)
print X_test.shape
print Y_test.shape
X_train = X_data
Y_train = Y_data
print X_train.shape
print X_test.shape
print Y_train.shape
print Y_test.shape
def yDist(y):
bcCounts = collections.defaultdict(int)
for a in range(0, y.shape[0]):
bcCounts[y[a][0]] += 1
return bcCounts
print "Y_train Dist: " + str(yDist(Y_train))
print "Y_test Dist: " + str(yDist(Y_test))
# Load the bc array for our count in the model definition
bcTypes = bc.bcNumerics()
print bcTypes
print len(bcTypes)
X_train_s = X_train.reshape((2132,-1))
X_test_s = X_test.reshape((536,-1))
Y_train_s = Y_train.ravel()
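# The SVM expects 2-D input (n_samples x n_features), so each 150x150 image is
# flattened above into a 22500-dimensional feature vector.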
model = SVC(C=1.0, gamma=0.001, kernel='rbf')
model.fit(X_train_s,Y_train_s)
predicted = model.predict(X_test_s)
expected = Y_test
svm_matrix = skm.confusion_matrix(Y_test, predicted)
svm_matrix
print metrics.accuracy_score(expected,predicted)
numBC = bc.numericBC()
class_names = list(numBC.values())
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names,
title='Confusion Matrix without normalization')
plt.savefig('raw_class3_o_norm.png')
from IPython.display import Image
Image(filename='raw_class3_o_norm.png')
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=True,
title='Confusion Matrix with normalization')
plt.savefig('raw_class3_norm.png')
# Load the image we just saved
from IPython.display import Image
Image(filename='raw_class3_norm.png')
imagePath = "DDSM_threshold"
trainDataPath = "data/ddsm_train.csv"
testDataPath = "data/ddsm_test.csv"
imgResize = (150, 150)
os.listdir('data')
metaData, meta2, mCounts = bc.load_training_metadata(trainDataPath, balanceViaRemoval=True, verbose=True)
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_data, Y_data = bc.load_data(trainDataPath, imagePath,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize)
print(X_data.shape)
print(Y_data.shape)
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_test, Y_test = bc.load_data(testDataPath, imagePath,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize)
print(X_test.shape)
print(Y_test.shape)
X_train = X_data
Y_train = Y_data
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
def yDist(y):
bcCounts = collections.defaultdict(int)
for a in range(0, y.shape[0]):
bcCounts[y[a][0]] += 1
return bcCounts
print("Y_train Dist: " + str(yDist(Y_train)))
print("Y_test Dist: " + str(yDist(Y_test)))
# Load the bc array for our count in the model definition
bcTypes = bc.bcNumerics()
print(bcTypes)
print(len(bcTypes))
X_train_s = X_train.reshape((2024,-1))
X_test_s = X_test.reshape((516,-1))
Y_train_s = Y_train.ravel()
model = SVC(C=1.0, gamma=0.001, kernel='rbf')
model.fit(X_train_s,Y_train_s)
predicted = model.predict(X_test_s)
expected = Y_test
svm_matrix = skm.confusion_matrix(Y_test, predicted)
svm_matrix
print(metrics.accuracy_score(expected, predicted))
numBC = bc.numericBC()
class_names = list(numBC.values())
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names,
title='Confusion Matrix without normalization')
plt.savefig('threshold_class3_o_norm.png')
from IPython.display import Image
Image(filename='threshold_class3_o_norm.png')
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=True,
title='Confusion Matrix with normalization')
plt.savefig('threshold_class3_norm.png')
# Load the image we just saved
from IPython.display import Image
Image(filename='threshold_class3_norm.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Raw DDSM images
Step2: Threshold Images
|
1,342
|
<ASSISTANT_TASK:>
Python Code:
ph_sel_name = "Dex"
data_id = "17d"
# ph_sel_name = "all-ph"
# data_id = "7d"
from fretbursts import *
init_notebook()
from IPython.display import display
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
d = loader.photon_hdf5(filename=files_dict[data_id])
d.ph_times_t, d.det_t
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
plot_alternation_hist(d)
loader.alex_apply_period(d)
d
d.time_max
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
def hsm_mode(s):
    """Half-sample mode (HSM) estimator of `s`.

    `s` is a sample from a continuous distribution with a single peak.

    Reference:
        Bickel, Fruehwirth (2005). arXiv:math/0505419
    """
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
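# Quick check (added, not in the original notebook): for a unimodal Gaussian
# sample the half-sample mode should land near the distribution peak (0 here).
print(hsm_mode(np.random.normal(0.0, 1.0, 1000)))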
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
ds_fret.fit_E_m(weights='size')
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
sample = data_id
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Step5: Laser alternation selection
Step6: We need to define some parameters
Step7: We should check if everything is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurement info
Step10: Or check the measurement duration
Step11: Compute background
Step12: Burst search and selection
Step14: Donor Leakage fit
Step15: Gaussian Fit
Step16: KDE maximum
Step17: Leakage summary
Step18: Burst size distribution
Step19: Fret fit
Step20: Weighted mean of $E$ of each burst
Step21: Gaussian fit (no weights)
Step22: Gaussian fit (using burst size as weights)
Step23: Stoichiometry fit
Step24: The Maximum likelihood fit for a Gaussian population is the mean
Step25: Computing the weighted mean and weighted standard deviation we get
Step26: Save data to file
Step27: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step28: This is just a trick to format the different variables
|
1,343
|
<ASSISTANT_TASK:>
Python Code:
%run ../bst/bst.py
%load ../bst/bst.py
from collections import deque

def bfs(self, visit_func):
    # Level-order traversal with a FIFO queue; assumes the Node class loaded
    # above exposes `data`, `left`, and `right` attributes.
    if self is None:
        raise TypeError('root node cannot be None')
    queue = deque([self])
    while queue:
        node = queue.popleft()
        visit_func(node.data)
        if node.left is not None:
            queue.append(node.left)
        if node.right is not None:
            queue.append(node.right)
%run ../utils/results.py
# %load test_bfs.py
from nose.tools import assert_equal
class TestBfs(object):
def __init__(self):
self.results = Results()
def test_bfs(self):
node = Node(5)
insert(node, 2)
insert(node, 8)
insert(node, 1)
insert(node, 3)
bfs(node, self.results.add_result)
assert_equal(str(self.results), '[5, 2, 8, 1, 3]')
print('Success: test_bfs')
def main():
test = TestBfs()
test.test_bfs()
if __name__ == '__main__':
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unit Test
|
1,344
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated
!pip install --quiet --upgrade tensorflow-model-optimization
%load_ext tensorboard
import functools
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
from tensorflow_model_optimization.python.core.internal import tensor_encoding as te
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
# This value only applies to EMNIST dataset, consider choosing appropriate
# values if switching to other datasets.
MAX_CLIENT_DATASET_SIZE = 418
CLIENT_EPOCHS_PER_ROUND = 1
CLIENT_BATCH_SIZE = 20
TEST_BATCH_SIZE = 500
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data(
only_digits=True)
def reshape_emnist_element(element):
return (tf.expand_dims(element['pixels'], axis=-1), element['label'])
def preprocess_train_dataset(dataset):
    """Preprocessing function for the EMNIST training dataset."""
return (dataset
# Shuffle according to the largest client dataset
.shuffle(buffer_size=MAX_CLIENT_DATASET_SIZE)
# Repeat to do multiple local epochs
.repeat(CLIENT_EPOCHS_PER_ROUND)
# Batch to a fixed client batch size
.batch(CLIENT_BATCH_SIZE, drop_remainder=False)
# Preprocessing step
.map(reshape_emnist_element))
emnist_train = emnist_train.preprocess(preprocess_train_dataset)
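# Quick sanity check (added sketch, not part of the original tutorial): one
# preprocessed batch should yield (batch, 28, 28, 1) images and (batch,) labels.
example_dataset = emnist_train.create_tf_dataset_for_client(
    emnist_train.client_ids[0])
for x_batch, y_batch in example_dataset.take(1):
    print(x_batch.shape, y_batch.shape)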
def create_original_fedavg_cnn_model(only_digits=True):
    """The CNN model used in https://arxiv.org/abs/1602.05629."""
data_format = 'channels_last'
max_pool = functools.partial(
tf.keras.layers.MaxPooling2D,
pool_size=(2, 2),
padding='same',
data_format=data_format)
conv2d = functools.partial(
tf.keras.layers.Conv2D,
kernel_size=5,
padding='same',
data_format=data_format,
activation=tf.nn.relu)
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
conv2d(filters=32),
max_pool(),
conv2d(filters=64),
max_pool(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10 if only_digits else 62),
tf.keras.layers.Softmax(),
])
return model
# Gets the type information of the input data. TFF is a strongly typed
# functional programming framework, and needs type information about inputs to
# the model.
input_spec = emnist_train.create_tf_dataset_for_client(
emnist_train.client_ids[0]).element_spec
def tff_model_fn():
keras_model = create_original_fedavg_cnn_model()
return tff.learning.from_keras_model(
keras_model=keras_model,
input_spec=input_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
federated_averaging = tff.learning.build_federated_averaging_process(
model_fn=tff_model_fn,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
#@title Load utility functions
def format_size(size):
    """A helper function for creating a human-readable size."""
size = float(size)
for unit in ['B','KiB','MiB','GiB']:
if size < 1024.0:
return "{size:3.2f}{unit}".format(size=size, unit=unit)
size /= 1024.0
return "{size:.2f}{unit}".format(size=size, unit='TiB')
def set_sizing_environment():
    """Creates an environment that contains sizing information."""
# Creates a sizing executor factory to output communication cost
# after the training finishes. Note that sizing executor only provides an
# estimate (not exact) of communication cost, and doesn't capture cases like
# compression of over-the-wire representations. However, it's perfect for
# demonstrating the effect of compression in this tutorial.
sizing_factory = tff.framework.sizing_executor_factory()
# TFF has a modular runtime you can configure yourself for various
# environments and purposes, and this example just shows how to configure one
# part of it to report the size of things.
context = tff.framework.ExecutionContext(executor_fn=sizing_factory)
tff.framework.set_default_context(context)
return sizing_factory
def train(federated_averaging_process, num_rounds, num_clients_per_round, summary_writer):
    """Trains the federated averaging process and outputs metrics."""
# Create a environment to get communication cost.
environment = set_sizing_environment()
# Initialize the Federated Averaging algorithm to get the initial server state.
state = federated_averaging_process.initialize()
with summary_writer.as_default():
for round_num in range(num_rounds):
      # Sample the clients participating in this round.
sampled_clients = np.random.choice(
emnist_train.client_ids,
size=num_clients_per_round,
replace=False)
# Create a list of `tf.Dataset` instances from the data of sampled clients.
sampled_train_data = [
emnist_train.create_tf_dataset_for_client(client)
for client in sampled_clients
]
      # Run one round of the algorithm based on the server state and client data
# and output the new state and metrics.
state, metrics = federated_averaging_process.next(state, sampled_train_data)
# For more about size_info, please see https://www.tensorflow.org/federated/api_docs/python/tff/framework/SizeInfo
size_info = environment.get_size_info()
broadcasted_bits = size_info.broadcast_bits[-1]
aggregated_bits = size_info.aggregate_bits[-1]
print('round {:2d}, metrics={}, broadcasted_bits={}, aggregated_bits={}'.format(round_num, metrics, format_size(broadcasted_bits), format_size(aggregated_bits)))
# Add metrics to Tensorboard.
for name, value in metrics['train']._asdict().items():
tf.summary.scalar(name, value, step=round_num)
# Add broadcasted and aggregated data size to Tensorboard.
tf.summary.scalar('cumulative_broadcasted_bits', broadcasted_bits, step=round_num)
tf.summary.scalar('cumulative_aggregated_bits', aggregated_bits, step=round_num)
summary_writer.flush()
# Clean the log directory to avoid conflicts.
!rm -R /tmp/logs/scalars/*
# Set up the log directory and writer for Tensorboard.
logdir = "/tmp/logs/scalars/original/"
summary_writer = tf.summary.create_file_writer(logdir)
train(federated_averaging_process=federated_averaging, num_rounds=10,
num_clients_per_round=10, summary_writer=summary_writer)
%tensorboard --logdir /tmp/logs/scalars/ --port=0
def broadcast_encoder_fn(value):
    """Function for building encoded broadcast."""
spec = tf.TensorSpec(value.shape, value.dtype)
if value.shape.num_elements() > 10000:
return te.encoders.as_simple_encoder(
te.encoders.uniform_quantization(bits=8), spec)
else:
return te.encoders.as_simple_encoder(te.encoders.identity(), spec)
def mean_encoder_fn(value):
    """Function for building encoded mean."""
spec = tf.TensorSpec(value.shape, value.dtype)
if value.shape.num_elements() > 10000:
return te.encoders.as_gather_encoder(
te.encoders.uniform_quantization(bits=8), spec)
else:
return te.encoders.as_gather_encoder(te.encoders.identity(), spec)
encoded_broadcast_process = (
tff.learning.framework.build_encoded_broadcast_process_from_model(
tff_model_fn, broadcast_encoder_fn))
encoded_mean_process = (
tff.learning.framework.build_encoded_mean_process_from_model(
tff_model_fn, mean_encoder_fn))
federated_averaging_with_compression = tff.learning.build_federated_averaging_process(
tff_model_fn,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
broadcast_process=encoded_broadcast_process,
aggregation_process=encoded_mean_process)
logdir_for_compression = "/tmp/logs/scalars/compression/"
summary_writer_for_compression = tf.summary.create_file_writer(
logdir_for_compression)
train(federated_averaging_process=federated_averaging_with_compression,
num_rounds=10,
num_clients_per_round=10,
summary_writer=summary_writer_for_compression)
%tensorboard --logdir /tmp/logs/scalars/ --port=0
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TFF for Federated Learning Research
Step2: Verify that TFF is working.
Step4: Prepare the input data
Step6: Define the model
Step7: Train the model and output the training metrics
Step11: Now let's run the federated averaging algorithm. From TFF's point of view, running the federated averaging algorithm looks like the following.
Step12: Launching TensorBoard with the root log directory shown above displays the training metrics. Loading the data may take a few seconds. Besides Loss and Accuracy, the amounts of broadcasted and aggregated data are also reported. Broadcasted data are the tensors the server pushes to each client, while aggregated data are the tensors each client returns to the server.
Step15: Build custom broadcast and aggregation functions
Step16: TFF provides APIs that convert the encoder functions into a form that the tff.learning.build_federated_averaging_process API can consume. By using tff.learning.framework.build_encoded_broadcast_from_model and tff.learning.framework.build_encoded_mean_from_model, you can create two functions to pass to the broadcast_process and aggregation_process arguments of tff.learning.build_federated_averaging_process, building a federated averaging algorithm with a lossy compression algorithm.
Step17: Train the model again
Step18: Launch TensorBoard again and compare the training metrics of the two runs.
|
1,345
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from simtk.openmm.app import *
from simtk.openmm import *
from simtk.unit import *
from sys import stdout
import matplotlib, matplotlib.pyplot as plt
import pandas as pd
from io import StringIO
import numpy as np, os
matplotlib.rcParams.update({'font.size': 12})
# simulation parameters
nsteps = 50000 # number of MD steps each of 2 fs
nwater = 1000 # number of water molecules
cutoff = 1.2*nanometers # pair potential cutoff
platform = 'OpenCL' # 'OpenCL', 'CUDA', 'CPU'
rho = 216/18.6206**3 # density from gromacs spc216.gro
boxlen = (nwater/rho)**(1/3)
print('box length = ', boxlen*angstrom)
print('half box = ', boxlen*angstrom/2)
%%writefile hoh.pdb
CRYST1 30.000 30.000 30.000 90.00 90.00 90.00 P 1 1
ATOM 1 OW HOH A 1 27.552 11.051 7.172 1.00 0.00 O
ATOM 2 HW1 HOH A 1 27.900 10.721 8.050 1.00 0.00 H
ATOM 3 HW2 HOH A 1 26.606 11.355 7.281 1.00 0.00 H
END
# write input file for packmol
PACKMOL_INPUT = """
tolerance %f
filetype pdb
output %s
# hoh will be put in a box
# defined by the minimum coordinates x, y and z = 0. 0. 0. and maximum
# coordinates box_size box_size box_size That is, they will be put in a cube of side
# box_size (the keyword "inside cube 0. 0. 0. box_size") could be used as well.
structure %s
number %d
inside box 0. 0. 0. %f %f %f
add_box_sides 0.0
end structure
""" % (2., 'water.pdb', 'hoh.pdb', nwater, boxlen, boxlen, boxlen)
with open('packmol_input.txt', 'w') as f:
    f.write(PACKMOL_INPUT)
!packmol < packmol_input.txt > /dev/null
elec_to_kJmol = (
constants.elementary_charge**2 *
AVOGADRO_CONSTANT_NA / (4*np.pi*1.0*8.854187817e-12
* constants.farad/constants.meter)).value_in_unit(kilojoule_per_mole*nanometer)
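# Sanity check (added): e^2 * N_A / (4 pi eps0) is the standard electrostatic
# conversion factor, approximately 138.935 kJ*nm/mol.
print(elec_to_kJmol)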
def findForce(system, forcetype, add=True):
Finds a specific force in the system force list - added if not found.
for force in system.getForces():
if isinstance(force, forcetype):
return force
if add==True:
system.addForce(forcetype())
return findForce(system, forcetype)
return None
def make_qpotential_system(topology, Rc=0.9*nanometers, moments=1000):
    ''' return a q-potential system '''
ff = ForceField('spce-custom.xml') # this will create a CustomNonbondedForce
system = ff.createSystem(
topology, nonbondedMethod=CutoffPeriodic,
nonbondedCutoff=Rc, constraints=HBonds, rigidWater=True)
def qPochhammerSymbol( Rc, moments ):
if isinstance(Rc, Quantity):
Rc = Rc / nanometer # strip unit
qP = 1.0
r = np.linspace(0, Rc, 5000)
for i in range( moments ):
qP *= (1 - (r/Rc)**(i+1) )
return qP
qP = Continuous1DFunction( qPochhammerSymbol(Rc, moments), 0*nanometers, Rc)
nonbonded = findForce(system, CustomNonbondedForce)
nonbonded.addTabulatedFunction( 'qP', qP ) # 'qP(r)' can now be used in energy function
nonbonded.addGlobalParameter( 'f', elec_to_kJmol ) # convert to kJ/mol
nonbonded.setEnergyFunction(
'f * charge1 * charge2 * qP(r)/r' \
' + 4 * epsilon * ( (sigma/r)^12 - (sigma/r)^6 )' \
' ; sigma = 0.5 * ( sigma1+sigma2 ); epsilon = sqrt( epsilon1*epsilon2 )'
)
print('qpot')
print(' periodic boundaries: ', nonbonded.usesPeriodicBoundaryConditions())
print(' switching function: ', nonbonded.getUseSwitchingFunction())
print(' long-range correction:', nonbonded.getUseLongRangeCorrection())
print(' cutoff distance: ', nonbonded.getCutoffDistance())
print(' energy function: ', nonbonded.getEnergyFunction())
return system
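# Note (added, not part of the original notebook): the tabulated qP factor is the
# finite product prod_{i=1}^{n} (1 - (r/Rc)^i), which smoothly damps the Coulomb
# 1/r term to zero at the cutoff Rc. A standalone evaluation at r = Rc/2, n = 3:
def _qP_check(r, Rc, n):
    qP = 1.0
    for i in range(n):
        qP *= 1 - (r / Rc)**(i + 1)
    return qP
print(_qP_check(0.45, 0.9, 3))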
def make_grekiss_system(topology, Rc=0.9*nanometers):
''' return a grekis system '''
ff = ForceField('spce-custom.xml') # this will create a CustomNonbondedForce
system = ff.createSystem(
topology, nonbondedMethod=CutoffPeriodic,
nonbondedCutoff=Rc, constraints=HBonds, rigidWater=True)
nonbonded = findForce(system, CustomNonbondedForce)
nonbonded.addGlobalParameter( 'Rc', Rc ) # 'Rc' can now be used in energy function
nonbonded.addGlobalParameter( 'f', elec_to_kJmol ) # 'lB' bjerrum length in nm and kJ/mol
nonbonded.setEnergyFunction(
'f * charge1 * charge2 * 1/r * ( 1 - 1.75*r/Rc + 5.25*(r/Rc)^5 - 7*(r/Rc)^6 + 2.5*(r/Rc)^7 )' \
' + 4 * epsilon * ( (sigma/r)^12 - (sigma/r)^6 )' \
' ; sigma = 0.5 * (sigma1+sigma2); epsilon = sqrt( epsilon1*epsilon2 )'
)
print('fanourgakis')
print(' periodic boundaries: ', nonbonded.usesPeriodicBoundaryConditions())
print(' switching function: ', nonbonded.getUseSwitchingFunction())
print(' long-range correction:', nonbonded.getUseLongRangeCorrection())
print(' cutoff distance: ', nonbonded.getCutoffDistance())
print(' energy function: ', nonbonded.getEnergyFunction())
return system
def make_yonezawa_system(topology, Rc=0.9*nanometers):
''' return a yonezawa system '''
ff = ForceField('spce-custom.xml') # this will create a CustomNonbondedForce
system = ff.createSystem(
topology, nonbondedMethod=CutoffPeriodic,
nonbondedCutoff=Rc, constraints=HBonds, rigidWater=True)
nonbonded = findForce(system, CustomNonbondedForce)
nonbonded.addGlobalParameter( 'Rc', Rc ) # 'Rc' can now be used in energy function
nonbonded.addGlobalParameter( 'f', elec_to_kJmol ) # 'lB' bjerrum length in nm and kJ/mol
nonbonded.setEnergyFunction(
'f * charge1 * charge2 * ( 1/r - 1/Rc + 1/Rc^2 * (r-Rc) )' \
' + 4 * epsilon * ( (sigma/r)^12 - (sigma/r)^6 )' \
' ; sigma = 0.5 * ( sigma1+sigma2 ); epsilon = sqrt( epsilon1 * epsilon2 )'
)
print('yonezawa')
print(' periodic boundaries: ', nonbonded.usesPeriodicBoundaryConditions())
print(' switching function: ', nonbonded.getUseSwitchingFunction())
print(' long-range correction:', nonbonded.getUseLongRangeCorrection())
print(' cutoff distance: ', nonbonded.getCutoffDistance())
print(' energy function: ', nonbonded.getEnergyFunction())
return system
def make_ewald_system(topology, Rc=0.9*nanometers, method=Ewald, ljcorr=False):
''' returns an Ewald system '''
ff = ForceField('spce.xml') # this will create a NonbondedForce
system = ff.createSystem(
topology, nonbondedMethod=method,
nonbondedCutoff=Rc, constraints=HBonds, rigidWater=True)
nonbonded = findForce(system, NonbondedForce)
nonbonded.setUseDispersionCorrection( ljcorr )
print('ewald')
print(' err. tolerance: ', nonbonded.getEwaldErrorTolerance())
print(' LJ switching function:', nonbonded.getUseSwitchingFunction())
print(' LJ correction: ', nonbonded.getUseDispersionCorrection())
print(' cutoff distance: ', nonbonded.getCutoffDistance())
return system
pdb = PDBFile('water.pdb')
# here we define our list of system incl. plotting properties. All files from simulation
# and analysis will be prefixed with 'qpot.', 'ewald.', etc.
systemlist = {
'yonezawa': {
'system': make_yonezawa_system( pdb.topology, Rc=cutoff ), 'npt': False,
'run': False, 'color': 'blue', 'label': 'Yonezawa'
},
'grekiss': {
'system': make_grekiss_system( pdb.topology, Rc=cutoff ), 'npt': False,
'run': False, 'color': 'green', 'label': 'Fanourgakis'
},
'qpot' : {
'system': make_qpotential_system( pdb.topology, Rc=cutoff ), 'npt': False,
'run': False, 'color': 'red', 'label': '$q$ ($n=\infty$)'
},
'qpot2' : {
'system': make_qpotential_system( pdb.topology, Rc=cutoff, moments=2 ), 'npt': False,
'run': False, 'color': 'orange', 'label': '$q$ ($n=2$)'
},
'qpot3' : {
'system': make_qpotential_system( pdb.topology, Rc=cutoff, moments=3 ), 'npt': False,
'run': False, 'color': 'magenta', 'label': '$q$ ($n=3$)'
},
'ewald': {
'system': make_ewald_system( pdb.topology, Rc=cutoff ), 'npt': False,
'run': False, 'color': 'black', 'label': 'Ewald'
}
}
for name, prop in systemlist.items(): # loop over systems
if prop['run']:
print(name)
if prop['npt']: # NVT -> NPT ensemble ?
print(' adding barostat.')
barostat = MonteCarloBarostat(1.0*bar, 298.15*kelvin, 25)
            prop['system'].addForce(barostat)
integrator = LangevinIntegrator( 298.15*kelvin, 1.0/picoseconds, 2*femtoseconds )
integrator.setConstraintTolerance(0.00001)
_platform = Platform.getPlatformByName( platform ) # CPU, OpenCL
sim = Simulation(pdb.topology, prop['system'], integrator, _platform)
sim.context.setPositions(pdb.positions) # set particle positions
if os.path.isfile( name+'.chk' ):
with open( name+'.chk', 'rb') as f:
print(' loading restart file.')
sim.context.loadCheckpoint( f.read() )
else:
print(' minimizing energy...')
sim.reporters.clear()
%time sim.minimizeEnergy( tolerance=50*kilojoule/mole, maxIterations=1000 )
sim.context.setVelocitiesToTemperature( 298.15*kelvin ) # initial random velocities
print(' running Production...')
sim.reporters.clear()
sim.reporters.append( DCDReporter( name+'.dcd', 1000) )
sim.reporters.append( StateDataReporter(name+'.energy', 1000, step=True, potentialEnergy=True,
temperature=True, density=True) )
sim.reporters.append( StateDataReporter(stdout, 1000, step=True, potentialEnergy=True,
temperature=True, density=True, separator='\t',
progress=True,
totalSteps = nsteps) )
%time sim.step( nsteps )
with open( name+'.chk', 'wb') as f:
print(' saving restart file.')
f.write( sim.context.createCheckpoint() )
# save final configuration to PDB file
# todo: something is fishy here... water molecules are outside the box
# in pdb although the trajectory looks fine. Å->nm problem?
positions = sim.context.getState( getPositions=True ).getPositions()
PDBFile.writeFile(sim.topology, positions, open( name+'.pdb', 'w'))
print()
import mdtraj as md
for name, prop in systemlist.items():
print(name)
traj = md.load(name+'.dcd', top=name+'.pdb')
sel = traj.top.select('name O')
OOpairs = traj.top.select_pairs('name O', 'name O')
rdf = md.compute_rdf( traj, pairs=OOpairs, bin_width=0.005, r_range=[0.2, boxlen/2/10] )
plt.plot( rdf[0], rdf[1], label=prop['label'], color=prop['color'], lw=1)
plt.xlim(0.2, 0.8)
plt.xlabel(r'$r$/nm')
plt.ylabel(r'$g(r)$')
plt.legend(loc=0, frameon=False, fontsize='medium')
for name, prop in systemlist.items():
step, Upot, temp, density = np.loadtxt(name+'.energy', skiprows=1, delimiter=',', unpack=True)
Upot = Upot / pdb.topology.getNumResidues() # -> energy per water molecule
plt.plot( step, Upot, label=prop['label'], color=prop['color'], lw=0.5 )
plt.plot( [0,step.max()], [Upot.mean(), Upot.mean()], color=prop['color'], ls='--', lw=0.5 )
prop['energy'] = Upot.mean()
plt.xlabel(r'steps')
plt.ylabel(r'$U_{pot}/N$ (kJ/mol)')
plt.legend(loc=1, frameon=True, fontsize='small')
def getChargeVector( force, stripunit=True ):
''' Extract particle charge array from force object if available
Note that this function assumes that charge is located at parameter
position 0 which is default in NonbondedForce while it can be anywhere
in CustomNonbondedForce, following the order in which
parameters were added in python/xml.
'''
if isinstance(force, (NonbondedForce, CustomNonbondedForce) ):
if 'getParticleParameters' in dir(force):
chargelist = []
for i in range( force.getNumParticles() ):
charge = force.getParticleParameters( i )[ 0 ]
if stripunit:
if isinstance(charge, Quantity):
charge = charge / elementary_charge # strip unit
chargelist.append( charge )
return chargelist
return None
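# Added check (not in the original): the SPC/E water box should be charge
# neutral; sum the charges of the Ewald system defined earlier.
nb_ewald = findForce(systemlist['ewald']['system'], NonbondedForce, add=False)
print('total charge:', sum(getChargeVector(nb_ewald)))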
for name, prop in systemlist.items():
for force in prop['system'].getForces():
if isinstance(force, (NonbondedForce, CustomNonbondedForce) ):
charges = getChargeVector( force )
traj = md.load(name+'.dcd', top=name+'.pdb')
l = len(traj)
l2= int(l/2)
eps12= md.static_dielectric( traj[1:l], charges, temperature=298.15) # all
eps1 = md.static_dielectric( traj[1:l2], charges, temperature=298.15) # 1st half
eps2 = md.static_dielectric( traj[l2:l], charges, temperature=298.15) # 2nd half
prop['diel'] = eps12
print(name, ': diel. const.', eps12, eps1, eps2)
pd.DataFrame(systemlist).T
%%bash
rm -fR
echo -n "6\n" | gmx -quiet -nobackup pdb2gmx -f water.pdb -o water.gro -p water.top -n water.ndx -water spce
gmx -quiet -nobackup grompp -f water.mdp -c water.gro -p water.top -o water.tpr -maxwarn 10
for name, prop in systemlist.items():
print(name)
traj = md.load(name+'.dcd', top=name+'.pdb')
traj.save_xtc(name+'.xtc')
!echo -n "0\n" | gmx -quiet -nobackup dipoles -f $name'.xtc' -s water.tpr -temp 298.15 -g $name'.gkr.xvg'
print()
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(8, 7))
for name, prop in systemlist.items():
r, G, cos, hOO, gOO, energy = np.loadtxt(name+'.gkr.xvg', skiprows=27, unpack=True)
ax1.plot(r, G, label=prop['label'], color=prop['color'], lw=1 )
ax1.set_xlabel('$r$/nm')
ax1.set_ylabel('$G_k(r)$')
ax2.plot(r, cos, label=prop['label'], color=prop['color'], lw=1 )
ax2.set_xlabel('$r$/nm')
    ax2.set_ylabel(r'$\langle\cos\theta\rangle$')  # cos(theta) column of the gkr output
ax2.set_xlim(0.2, 0.8)
ax2.set_ylim(-0.1, 0.55)
ax3.plot(r, hOO, label=prop['label'], color=prop['color'], lw=1 )
ax3.set_xlabel('$r$/nm')
ax3.set_ylabel('hOO')
ax3.set_xlim(0.2, 0.8)
ax4.plot(r, gOO, label=prop['label'], color=prop['color'], lw=1 )
ax4.set_xlabel('$r$/nm')
ax4.set_ylabel('gOO')
ax4.set_xlim(0.2, 0.8)
ax2.legend(loc=0, frameon=False, fontsize='medium')
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Create initial box of water
Step4: Create OpenMM System classes for a variety of long-range correction schemes
Step5: Here we perform the actual MD simulation, incl. minimization of all defined systems
Step6: Oxygen-oxygen radial distribution function
Step7: Average potential energy
Step8: Static dielectric constant
Step9: Gromacs analysis
Step10: Convert trajectory to XTC format and calculate dipolar correlations
|
1,346
|
<ASSISTANT_TASK:>
Python Code:
data = pd.read_csv('/Users/Frankie/Documents/Dissertation/Data/pancreatic/24hProbeExpressionValues.csv')
data[:5]
data = data.T
label = pd.read_csv('/Users/Frankie/Documents/Dissertation/Data/pancreatic/24hTargets.csv')
label[:5]
label = label[['FileName', 'OAC']]
label[:5]
joined_tables = label.join(data, on='FileName', how = 'outer')
joined_tables = joined_tables[pd.notnull(joined_tables['Probe1'])]
joined_tables['label'] = np.where(joined_tables['OAC']=='Mild', 0, 1)
joined_tables[:5]
joined_tables.drop(['FileName','OAC'], axis=1).to_csv("/Users/Frankie/Desktop/pancreatic.csv",index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The columns are the instances and rows the features so we need to transpose the dataset.
Step2: Read in the labels...
Step3: We are using the OAC labeling...
Step4: Join the data and labels on the FileName, remove any null rows and create a label column with 0s and 1s.
Step5: Drop the FileName and OAC columns and export as a CSV file.
|
1,347
|
<ASSISTANT_TASK:>
Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from netCDF4 import Dataset, num2date
from metpy.cbook import get_test_data
from metpy.interpolate import log_interpolate_1d
from metpy.plots import add_metpy_logo, add_timestamp
from metpy.units import units
data = Dataset(get_test_data('wrf_example.nc', False))
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
time = data.variables['time']
vtimes = num2date(time[:], time.units)
temperature = data.variables['temperature'][:] * units.celsius
pres = data.variables['pressure'][:] * units.pascal
hgt = data.variables['height'][:] * units.meter
plevs = [700.] * units.hPa
height, temp = log_interpolate_1d(plevs, pres, hgt, temperature, axis=1)
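# Added check: after interpolation the arrays carry a single vertical level,
# corresponding to the one requested pressure surface (700 hPa).
print(height.shape, temp.shape)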
# Set up our projection
crs = ccrs.LambertConformal(central_longitude=-100.0, central_latitude=45.0)
# Set the forecast hour
FH = 1
# Create the figure and grid for subplots
fig = plt.figure(figsize=(17, 12))
add_metpy_logo(fig, 470, 320, size='large')
# Plot 700 hPa
ax = plt.subplot(111, projection=crs)
ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Plot the heights
cs = ax.contour(lon, lat, height[FH, 0, :, :], transform=ccrs.PlateCarree(),
colors='k', linewidths=1.0, linestyles='solid')
ax.clabel(cs, fontsize=10, inline=1, inline_spacing=7,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour the temperature
cf = ax.contourf(lon, lat, temp[FH, 0, :, :], range(-20, 20, 1), cmap=plt.cm.RdBu_r,
transform=ccrs.PlateCarree())
cb = fig.colorbar(cf, orientation='horizontal', extend='max', aspect=65, shrink=0.5,
pad=0.05, extendrect='True')
cb.set_label('Celsius', size='x-large')
ax.set_extent([-106.5, -90.4, 34.5, 46.75], crs=ccrs.PlateCarree())
# Make the axis title
ax.set_title('{:.0f} hPa Heights (m) and Temperature (C)'.format(plevs[0].m), loc='center',
fontsize=10)
# Set the figure title
fig.suptitle('WRF-ARW Forecast VALID: {:s} UTC'.format(str(vtimes[FH])), fontsize=14)
add_timestamp(ax, vtimes[FH], y=0.02, high_contrast=True)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Array of desired pressure levels
Step3: Interpolate The Data
Step4: Plotting the Data for 700 hPa.
|
1,348
|
<ASSISTANT_TASK:>
Python Code:
# Stdlib imports
from io import BytesIO
# Third-party libraries
from IPython.display import Image
from ipywidgets import interact, interactive, fixed
import matplotlib as mpl
from skimage import data, filters, io, img_as_float
import numpy as np
i = img_as_float(data.coffee())
i.shape
def arr2img(arr):
    """Display a 2- or 3-d numpy array as an image."""
if arr.ndim == 2:
format, cmap = 'png', mpl.cm.gray
elif arr.ndim == 3:
format, cmap = 'jpg', None
else:
raise ValueError("Only 2- or 3-d arrays can be displayed as images.")
# Don't let matplotlib autoscale the color range so we can control overall luminosity
vmax = 255 if arr.dtype == 'uint8' else 1.0
with BytesIO() as buffer:
mpl.image.imsave(buffer, arr, format=format, cmap=cmap, vmin=0, vmax=vmax)
out = buffer.getvalue()
return Image(out)
arr2img(i)
def edit_image(image, sigma=0.1, R=1.0, G=1.0, B=1.0):
new_image = filters.gaussian(image, sigma=sigma, multichannel=True)
new_image[:,:,0] = R*new_image[:,:,0]
new_image[:,:,1] = G*new_image[:,:,1]
new_image[:,:,2] = B*new_image[:,:,2]
return arr2img(new_image)
edit_image(i, sigma=5, R=0.1)
lims = (0.0,1.0,0.01)
interact(edit_image, image=fixed(i), sigma=(0.0,10.0,0.1), R=lims, G=lims, B=lims);
def choose_img(name):
# Let's store the result in the global `img` that we can then use in our image editor below
global img
img = getattr(data, name)()
return arr2img(img)
# Skip 'load' and 'lena', two functions that don't actually return images
interact(choose_img, name=sorted(set(data.__all__)-{'lena', 'load'}));
lims = (0.0, 1.0, 0.01)
def edit_image(image, sigma, R, G, B):
new_image = filters.gaussian(image, sigma=sigma, multichannel=True)
if new_image.ndim == 3:
new_image[:,:,0] = R*new_image[:,:,0]
new_image[:,:,1] = G*new_image[:,:,1]
new_image[:,:,2] = B*new_image[:,:,2]
else:
new_image = G*new_image
return arr2img(new_image)
interact(edit_image, image=fixed(img), sigma=(0.0, 10.0, 0.1),
R=lims, G=lims, B=lims);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's load an image from scikit-image's collection, stored in the data module. These come back as regular numpy arrays
Step3: Let's make a little utility function for displaying Numpy arrays with the IPython display protocol
Step4: Now, let's create a simple "image editor" function, that allows us to blur the image or change its color balance
Step5: We can call this function manually and get a new image. For example, let's do a little blurring and remove all the red from the image
Step6: But it's a lot easier to explore what this function does by controlling each parameter interactively and getting immediate visual feedback. IPython's ipywidgets package lets us do that with a minimal amount of code
Step7: Browsing the scikit-image gallery, and editing grayscale and jpg images
Step8: And now, let's update our editor to cope correctly with grayscale and color images, since some images in the scikit-image collection are grayscale. For these, we ignore the red (R) and blue (B) channels, and treat 'G' as 'Grayscale'
|
1,349
|
<ASSISTANT_TASK:>
Python Code:
# Import required modules
import pandas as pd
from sklearn import preprocessing
# Set charts to view inline
%matplotlib inline
# Create an example dataframe with a column of unnormalized data
data = {'score': [234,24,14,27,-74,46,73,-18,59,160]}
df = pd.DataFrame(data)
df
# View the unnormalized data
df['score'].plot(kind='bar')
# Create x, where x is the 'score' column's values as a 2-D float array
# (scikit-learn scalers expect input of shape (n_samples, n_features))
x = df[['score']].values.astype(float)
# Create a minimum and maximum processor object
min_max_scaler = preprocessing.MinMaxScaler()
# Create an object to transform the data to fit minmax processor
x_scaled = min_max_scaler.fit_transform(x)
# Run the normalizer on the dataframe
df_normalized = pd.DataFrame(x_scaled)
# View the dataframe
df_normalized
# Plot the dataframe
df_normalized.plot(kind='bar')
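# Equivalent manual computation (added sketch): MinMaxScaler maps each value to
# (x - x.min()) / (x.max() - x.min()), rescaling the column onto [0, 1].
manual = (x - x.min()) / (x.max() - x.min())
print(manual.ravel())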
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Unnormalized Data
Step2: Normalize The Column
|
1,350
|
<ASSISTANT_TASK:>
Python Code:
from astromodels import *
# Create a point source named "pts1"
pts1 = PointSource('pts1',ra=125.23, dec=17.98, spectral_shape=powerlaw())
# Create the model
my_model = Model(pts1)
uniform_prior.info()
# Set a log-uniform prior on the normalization K between 1e-15 and 1e-7,
# leaving the 'value' parameter to the default value
pts1.spectrum.main.powerlaw.K.prior = log_uniform_prior(lower_bound = 1e-15, upper_bound=1e-7)
# Display it
pts1.spectrum.main.powerlaw.K.display()
# Set 'lower_bound' to -10, 'upper bound' to 0, and leave the 'value' parameter
# to the default value
pts1.spectrum.main.powerlaw.index.prior = uniform_prior(lower_bound = -10, upper_bound=0)
pts1.spectrum.main.powerlaw.index.display()
# Create a short cut to avoid writing too much
po = pts1.spectrum.main.powerlaw
# Evaluate the prior at 2.3e-21
point = 2.3e-21
prior_value1 = po.K.prior(point * po.K.unit)
# Equivalently we can use the fast call with no units
prior_value2 = po.K.prior.fast_call(point)
assert prior_value1 == prior_value2
print("The prior for K evaluates to %s at %s" % (prior_value1, point))
# You need matplotlib installed for this
import matplotlib.pyplot as plt
# This is for the IPython notebook
%matplotlib inline
# Let's get 50 points log-spaced between 1e-30 and 1e2
random_points = np.logspace(-30,2,50)
plt.loglog(random_points,pts1.spectrum.main.powerlaw.K.prior.fast_call(random_points), '.' )
#plt.xscale("log")
#plt.ylim([-0.1,1.2])
plt.xlabel("value of K")
plt.ylabel("Prior")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's assign uniform priors to the parameters of the powerlaw function. The function uniform_prior is defined like this
Step2: We can use it as such
Step3: Now we can evaluate the prior simply as
Step4: Let's plot the value of the prior at some random locations
|
1,351
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-2', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
1,352
|
<ASSISTANT_TASK:>
Python Code:
from collections import Counter, defaultdict
from datetime import datetime
from sklearn.decomposition import PCA
import csv
import matplotlib.pyplot as plt
import numpy as np
import os.path
import pandas as pd
import re
import seaborn as sns; sns.set()
import time
import twitter
% matplotlib inline
plt.rcParams['figure.figsize'] = (10,7)
import warnings
warnings.filterwarnings("ignore")
#print twitter.__path__
import random
random.seed(1000)
# Pandas data-frame print formatting
from IPython.core.display import HTML
css = open('style-table.css').read() + open('style-notebook.css').read()
HTML('<style>{}</style>'.format(css))
self_screen_name = 'bala_io' # Self
# Keep appending data
fof_filename = "edges.csv" # 'Alters' and their Source->Sink Edges.
cache_filename = "cache.csv" # local cache of (TwitterId, FullName, UserName)
# One-time-use files
binaryMap_filename = "binaryMap.csv" # Directed Graph. Adjacencies as 0/1.RowFollowCol
cluster_filename = "results.csv"
# Twitter auth. https://dev.twitter.com/oauth/overview/application-owner-access-tokens
with open("../../passwd/credentials.txt", "r") as f:
reader = csv.reader(f )
login_dict = {line[0]: line[1]
for line in reader}
api = twitter.Api(consumer_key=login_dict.get('consumer_key') ,
consumer_secret=login_dict.get('consumer_secret'),
access_token_key=login_dict.get('access_token_key'),
access_token_secret=login_dict.get('access_token_secret'))
api
# 'Self' and Friends of Self
self_node = api.GetUser(screen_name = self_screen_name)
self_node_id = str(self_node.id) # Twitter Id of Self
friends_of_self = api.GetFriendIDs(user_id = self_node_id,
screen_name = self_screen_name ,
stringify_ids = True)
index = [self_node_id] + friends_of_self
# GetFriendIDs() API call is rate-limited at 15 req / 15 min
# https://dev.twitter.com/rest/public/rate-limiting
# For each of the list of nodes, fetch the list of nodes it follows, append to file
def update_FoF_File(fileName, to_fetch_list):
with open(fileName, 'a') as f:
apiReqCount = 0
for node in to_fetch_list:
friends_of_node = api.GetFriendIDs(user_id = node, stringify_ids = True)
row = ','.join([str(i) for i in [node] + friends_of_node ]) + "\n"
f.write(row)
apiReqCount += 1
if (apiReqCount == 15):
apiReqCount = 0
print("Off to Sleep :)")
time.sleep(15*60 + 10)
# Parse the FoF file and return the list of nodes whose source->sink edges are already stored.
def getFinishList(fileName):
if not os.path.isfile(fileName):
return []
with open(fileName, 'r') as f:
return [ row.strip().split(',')[0] for row in f ] # 1st entry is a user
# Ego-network as adjacency-matrix
# Parses FoF file in order of index, create list of adjacencies as 0 | 1
# Writes to File. Adjacency-matrix in Row_follows_Column format
def updateBinaryMapFile(fof_filename, binaryMap_filename, index):
with open(fof_filename, "r") as f:
stripped_f = (line.replace('\r', '') for line in f)
reader = csv.reader(stripped_f)
fof_dict = {line[0]: line[1:] for line in reader
if line[0] in index} # dict of node -> accounts that node follows
if self_node_id not in fof_dict:
fof_dict[self_node_id] = index[1:] # dict of Self
bool_list = []
for user in index:
user_friends = set( fof_dict[user] )
bool_row = [item in user_friends for item in index] # for each, fill T/F
bool_list.append(bool_row)
int_nparray = np.array(bool_list) + 0 # Bool to int
binaryMap_rfc = pd.DataFrame(data = int_nparray, columns= index, index = index)
binaryMap_rfc.to_csv(binaryMap_filename)
# For list of Ids, fetch Profile details. If not in Offline file, make an API call
# Returns ['UserName', 'FullName', 'Followers_count', 'Friends_count', 'Location', 'Created_at']
# UsersLookup API 100 Ids/request
def lookup_in_cache(friendsIdsList):
cache, delta_cache = pd.DataFrame(), pd.DataFrame()
UserNameList, namesList = [], []
followers_count, friends_count = [], []
location, created_at = [], []
if os.path.isfile(cache_filename):
cache = pd.read_csv(cache_filename, skip_blank_lines=True,
dtype={'Ids':str, 'Friends_count':int, 'Followers_count':int})
cache.set_index('Ids', inplace=True)
to_fetch_list = list ( set (friendsIdsList) - set(cache.index) )
else :
to_fetch_list = friendsIdsList
i = 0
while (i < len(to_fetch_list) * 1./100):
print("... Cache-Miss for " + str(len(to_fetch_list)) + " nodes. Updating Cache...")
low, high = i * 100, min( len(to_fetch_list), (i+1)*100 ) # UsersLookup api
twitterObjectsList = api.UsersLookup(user_id = to_fetch_list[low:high])
temp = zip(*[( tempObject.screen_name, #ScreenName
tempObject.name, #Name
tempObject.followers_count, #Followers
tempObject.friends_count, #Friends
tempObject.location, #Location
tempObject.created_at #CreatedAt
) for tempObject in twitterObjectsList])
temp = list(temp)
UserNameList += list(temp[0])
namesList += list(temp[1])
followers_count += list(temp[2])
friends_count += list(temp[3])
location += list(temp[4])
created_at += list(temp[5])
i = i + 1
if len(to_fetch_list) > 0:
delta_cache = pd.DataFrame({'UserName':UserNameList,
'FullName':namesList,
'Ids': to_fetch_list,
'Followers':followers_count,
'Friends': friends_count,
'Location':location,
'Created':created_at})
delta_cache['Created'] = delta_cache['Created'].apply(lambda x:
datetime.strptime(
re.sub(r"\+[0-9]* ", "",x),'%c').
strftime("%b-%Y"))
delta_cache.set_index('Ids', inplace=True, drop = True)
cache = cache.append(delta_cache)
cache.to_csv(cache_filename)
return cache.loc[friendsIdsList]
# Display cluster-wise most-influential users, for the given clustering algo
def top_nodes_in_cluster(df, cluster_algo, n_clusters):
dummy_df = pd.DataFrame()
for i in range(n_clusters):
nodes_in_cluster = list( df [df[cluster_algo] == i ]['FullName'] )
if len(nodes_in_cluster) >= 10: # show only clusters of size > 10
col_name = str(i) + " : " + str(len(nodes_in_cluster)) + " Ids"
dummy_df[col_name] = nodes_in_cluster[:10]
return dummy_df
# identify 20 friends to follow after aggregating friends followed by top 50% in list
def discover_Friends_toFollow(ids_of_interest, friend_list, prop = .5, count = 20):
ids_of_interest = ids_of_interest[:int(len(ids_of_interest) * prop)]
if self_node_id in ids_of_interest:
ids_of_interest.remove(self_node_id)
print("'Who-To-Follow' reco after looking at %3d friends' friends:" %(len(ids_of_interest)))
with open(fof_filename) as f:
reader = csv.reader(f)
fof_dict = {row[0]:row[0:] for row in reader} # dict of node -> that node plus the accounts it follows
friendsToFollow = []
for id in ids_of_interest:
friendsToFollow += list (set(fof_dict[str(id)]) - set(friend_list) )
friendsToFollow = Counter(friendsToFollow).most_common(count)
tuples_list = list(zip(*friendsToFollow) )
topFriendsToFollowDF = pd.DataFrame()
topFriendsToFollowDF['Ids'] = list(tuples_list[0])
topFriendsToFollowDF['Freq'] = list(tuples_list[1])
topFriendsToFollowDF.set_index('Ids', drop = True, inplace = True)
index = topFriendsToFollowDF.index
topFriendsToFollowDF = topFriendsToFollowDF.merge(lookup_in_cache(index), copy = False,
left_index = True, right_index = True)
return topFriendsToFollowDF
# For the list of nodes I follow, fetch their friends-list
fof_finish_list = getFinishList(fof_filename ) # Completed nodes
fof_to_fetch_list = list ( set(friends_of_self) - set(fof_finish_list) ) # Pending nodes
print( str(len(fof_to_fetch_list)) + " out of " + str(len(index) - 1) +
" Friends details to be fetched")
# For the remaining nodes, populate their details in fof_file
update_FoF_File(fof_filename, fof_to_fetch_list)
# Build the adjacency matrix in terms of 0 and 1 (if there is an edge)
updateBinaryMapFile(fof_filename, binaryMap_filename, index)
# Read adj-matrix into df. Cell MxN is 1 iff node in Mth row follows node in Nth column
binaryMap_rfc = pd.read_csv(binaryMap_filename, skip_blank_lines=True, index_col = 0)
print(binaryMap_rfc.shape)
outlinks_count = binaryMap_rfc.sum(axis = 1) # horizontal-sum to count outlinks
inlinks_count = binaryMap_rfc.sum(axis = 0) # vertical-sum to count inlinks
# Histogram of number of OutLinks per node, within ego-network
sns.distplot(outlinks_count, bins = len(index)//10, kde=False);
# Histogram of number of InLinks per node, within ego-network
sns.distplot(inlinks_count, bins = len(index)//10, kde=False);
binaryMap_cfr = binaryMap_rfc.transpose() # column-values: Outlinks
binaryMap_cfr_norm = binaryMap_cfr / binaryMap_cfr.sum(axis = 0)
colStochMatrix = np.matrix( binaryMap_cfr_norm.fillna(0)) # column-stochastic-matrix
pageRankVector = np.matrix([1.0/len(index)] * len(index)) # iniitialize page-rank-vector
pageRankVector = pageRankVector.transpose() # transpose to column-vector
# PageRank algo: Power Iteration to solve Markov transition matrix
# refer this : http://setosa.io/blog/2014/07/26/markov-chains/index.html
beta = 0.85
epsilon = 999
iteration = 0
while epsilon > (1.0/(10**16)):
pageRankVectorUpdating = colStochMatrix * pageRankVector * beta
# re-insert leaked page-ranks
S = np.array(pageRankVectorUpdating).sum()
pageRankVectorUpdated = pageRankVectorUpdating + (
1 - S) * (1.0/len(index)) * np.ones_like(pageRankVectorUpdating) # add the leaked rank uniformly to every entry
# compute the squared-difference and check for convergence
error = np.array(pageRankVectorUpdated - pageRankVector)
epsilon = np.sqrt((error * error).sum())
iteration = iteration + 1
pageRankVector = pageRankVectorUpdated
print( "Sum of Page-Rank Scores: " + str(pageRankVector.sum()) +
"\nConverged in " + str(iteration) + " iterations")
# Collect the results
results_df = pd.DataFrame()
results_df['Ids'], results_df['PageRank'] = index, pageRankVector
results_df['Inlinks'], results_df['Outlinks'] = list(inlinks_count), list(outlinks_count)
results_df = results_df.set_index('Ids', drop = True )
results_df = results_df.merge(lookup_in_cache(index), copy = False,
left_index = True, right_index = True)
results_df = results_df[['PageRank','UserName', 'FullName', 'Inlinks' , 'Outlinks',
'Followers','Friends', 'Location', 'Created' ]]
results_df.fillna('').sort_values(by = 'PageRank', ascending =False).set_index('FullName').head(10)
dummy_df = pd.DataFrame()
temp_df = results_df.sort_values( by = 'PageRank', ascending =False)
for i in range(10):
dummy_df[i] = list (temp_df [10*i : 10* i + 10]['FullName'])
dummy_df
pageRank_to_plot = len(index) * results_df["PageRank"]
sns.distplot(pageRank_to_plot, kde=False, rug=True, bins = len(index)//10);
sns.jointplot(x="Inlinks", y="Outlinks", data=results_df, kind = "kde");
n_clusters = min( int( round(np.sqrt(len(index)/2)) ), 10 ) # not more than 10 clusters
print(n_clusters)
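# Aside (a sketch, not the selection rule used above): an alternative is to
# choose the cluster count by silhouette score and keep the best-scoring k.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
for k in range(2, n_clusters + 1):
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(binaryMap_cfr)
    print(k, round(silhouette_score(binaryMap_cfr, labels), 3))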
from sklearn.cluster import KMeans
est = KMeans(max_iter = 100000, n_clusters = n_clusters, n_init = 200, init='k-means++')
results_df['kmeans'] = est.fit_predict(binaryMap_cfr)
top_nodes_in_cluster(results_df.sort_values( by = 'PageRank', ascending =False),
'kmeans', n_clusters)
from sklearn import cluster
spectral = cluster.SpectralClustering(n_clusters=n_clusters, n_init = 500,
eigen_solver='arpack',
affinity="nearest_neighbors")
spectral.fit(binaryMap_cfr)
results_df['spectral'] = spectral.labels_.astype(int) # the np.int alias was removed in recent NumPy; the builtin int is equivalent here
top_nodes_in_cluster(results_df.sort_values( by = 'PageRank', ascending =False),
'spectral', n_clusters)
results_df [results_df['spectral'].isin([1])].sort_values(by = 'PageRank', ascending =False).set_index('FullName').head()
from sklearn.cluster import AffinityPropagation
af = AffinityPropagation(preference=-50).fit(binaryMap_cfr)
results_df['affinity'] = af.labels_
n_clusters_affinity = len(af.cluster_centers_indices_)
print(str(n_clusters_affinity) + " affinity clusters.")
top_nodes_in_cluster(results_df.sort_values( by = 'PageRank', ascending =False),
'affinity', n_clusters_affinity)
pca = PCA(n_components=3)
Xproj = pca.fit_transform(binaryMap_cfr)
results_df['dim1'] = Xproj[:,0]
results_df['dim2'] = Xproj[:,1]
results_df['dim3'] = Xproj[:,2]
results_df = results_df.sort_values( by = 'PageRank', ascending =False)
results_df.to_csv(cluster_filename)
print("Explained-variance and Proportion of Explained-variance in 3 dimensions [dim1 dim2 dim3]")
print(pca.explained_variance_, pca.explained_variance_ratio_)
# Spectral clustering | Plot the ego-network in 2 dimensions
plt.figure(num=None, figsize=(20, 10), dpi=80, facecolor='w', edgecolor='k')
plt.scatter(results_df['dim1'], results_df['dim2'], s = 100 ,c= results_df['spectral'],
alpha=0.5, cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar();
discover_Friends_toFollow(ids_of_interest = index,
friend_list = index,
prop = .5 , count = 20).fillna('').set_index('FullName')
favorite_cluster_df = results_df [results_df['spectral'].isin([0,2,5,8])]
favorite_cluster_list = list(favorite_cluster_df.index)
discover_Friends_toFollow(ids_of_interest = favorite_cluster_list,
friend_list = index,
prop = .5, count = 30).fillna('').set_index('FullName')
discover_Friends_toFollow(ids_of_interest = list(results_df [results_df['spectral'] == 9].index),
friend_list = index,
prop = 1, count = 20).fillna('').set_index('FullName')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Identify influential friends using 'Page Rank' formulation
Step2: Initialize PageRank vector, such that all the nodes have equal PageRank score adding upto 1.
Step3: On applying the above Transition-matrix transformation iteratively on the PageRank vector, the vector will eventually converge such that
Step4: 2a. Nodes with high PageRank scores
Step5: 2b. Top 100 influential-nodes in Ego-Network
Step6: 2c. Histogram of PageRank scores
Step7: A joint-plot showing how Inlinks and Outlinks of the nodes are distributed (within the ego-network)
Step8: 3. Identify implicit clusters using Clustering algos
Step9: 3a. K-Means clustering
Step10: 3b. Spectral clustering
Step11: Nodes mainly from the Deep Learning community have grouped into Cluster 8, the Python Machine Learning community into Cluster 2, the Design community into Cluster 9, and the general Data Science community into Clusters 0 and 5
Step12: 3c. Affinity Propagation clustering
Step13: 3d. Principal Component Analysis
Step14: A simpler visualization of Spectral-clustering outcome as rendered in 2 dimensions.
Step15: More on Clustering Algorithms
Step16: Recommendations on the basis of a specific cluster outcome.
Step17: 4c.Reco
|
1,353
|
<ASSISTANT_TASK:>
Python Code:
rootdir = "C:\\Users\\Shantnu\\Desktop\\Data Sources\\Enron Spam"
# Loop through all the directories, sub directories and files in the above folder, and print them.
# For files, print number of files.
for directories, subdirs, files in os.walk(rootdir):
print(directories, subdirs, len(files))
print(os.path.split("C:\\Users\\Shantnu\\Desktop\\Data Sources\\Enron Spam\\enron1\\ham"))
print(os.path.split("C:\\Users\\Shantnu\\Desktop\\Data Sources\\Enron Spam\\enron1\\ham")[0])
print(os.path.split("C:\\Users\\Shantnu\\Desktop\\Data Sources\\Enron Spam\\enron1\\ham")[1])
# Same as before, but only print the ham and spam folders
for directories, subdirs, files in os.walk(rootdir):
if (os.path.split(directories)[1] == 'ham'):
print(directories, subdirs, len(files))
if (os.path.split(directories)[1] == 'spam'):
print(directories, subdirs, len(files))
ham_list = []
spam_list = []
# Same as before, but this time, read the files, and append them to the ham and spam list
for directories, subdirs, files in os.walk(rootdir):
if (os.path.split(directories)[1] == 'ham'):
for filename in files:
with open(os.path.join(directories, filename), encoding="latin-1") as f:
data = f.read()
ham_list.append(data)
if (os.path.split(directories)[1] == 'spam'):
for filename in files:
with open(os.path.join(directories, filename), encoding="latin-1") as f:
data = f.read()
spam_list.append(data)
print(ham_list[0])
print(spam_list[0])
# Write a function that, when passed a list of words, returns a dictionary of the form
# {Word1: True, Word2: True, Words3: True}
# Removing stop words is optional
def create_word_features(words):
my_dict = dict( [ (word, True) for word in words] )
return my_dict
create_word_features(["the", "quick", "brown", "quick", "a", "fox"])
ham_list = []
spam_list = []
# Same as before, but this time:
# 1. Break the sentences into words using word_tokenize
# 2. Use the create_word_features() function you just wrote
for directories, subdirs, files in os.walk(rootdir):
if (os.path.split(directories)[1] == 'ham'):
for filename in files:
with open(os.path.join(directories, filename), encoding="latin-1") as f:
data = f.read()
# The data we read is one big string. We need to break it into words.
words = word_tokenize(data)
ham_list.append((create_word_features(words), "ham"))
if (os.path.split(directories)[1] == 'spam'):
for filename in files:
with open(os.path.join(directories, filename), encoding="latin-1") as f:
data = f.read()
# The data we read is one big string. We need to break it into words.
words = word_tokenize(data)
spam_list.append((create_word_features(words), "spam"))
print(ham_list[0])
print(spam_list[0])
combined_list = ham_list + spam_list
print(len(combined_list))
random.shuffle(combined_list)
# Create a test and train section.
# 70% of the data is training. 30% is test
training_part = int(len(combined_list) * .7)
print(len(combined_list))
training_set = combined_list[:training_part]
test_set = combined_list[training_part:]
print (len(training_set))
print (len(test_set))
# Create the Naive Bayes filter
classifier = NaiveBayesClassifier.train(training_set)
# Find the accuracy, using the test data
accuracy = nltk.classify.util.accuracy(classifier, test_set)
print("Accuracy is: ", accuracy * 100)
classifier.show_most_informative_features(20)
# Classify the messages below as spam or ham
# Hint: 1. Break into words using word_tokenize
# 2. create_word_features
# 3. Use the classify function
msg1 = '''Hello th̓ere seُx master :-)
i need c0ck ri͏ght noِw ..͏. don't tell my hǔbbٚy.ٚ. ))
My sc͕rٞeٚe̻nname is Dorry.
My accֺo֔unt is h֯ere: http:nxusxbnd.GirlsBadoo.ru
C u late٘r!'''
msg2 = '''As one of our top customers we are providing 10% OFF the total of your next used book purchase from www.letthestoriesliveon.com. Please use the promotional code, TOPTENOFF at checkout. Limited to 1 use per customer. All books have free shipping within the contiguous 48 United States and there is no minimum purchase.
We have millions of used books in stock that are up to 90% off MRSP and add tens of thousands of new items every day. Don’t forget to check back frequently for new arrivals.'''
msg3 = '''To start off, I have a 6 new videos + transcripts in the members section. In it, we analyse the Enron email dataset, half a million files, spread over 2.5GB. It's about 1.5 hours of video.
I have also created a Conda environment for running the code (both free and member lessons). This is to ensure everyone is running the same version of libraries, preventing the Works on my machine problems. If you get a second, do you mind trying it here?'''
words = word_tokenize(msg1)
features = create_word_features(words)
print("Message 1 is :" ,classifier.classify(features))
words = word_tokenize(msg2)
features = create_word_features(words)
print("Message 2 is :" ,classifier.classify(features))
words = word_tokenize(msg3)
features = create_word_features(words)
print("Message 3 is :" ,classifier.classify(features))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python 3 Encodings
|
1,354
|
<ASSISTANT_TASK:>
Python Code:
# Imports the functionality that we need to display YouTube videos in a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# Display a specific YouTube video, with a given width and height.
# WE STRONGLY RECOMMEND that you can watch the video in full-screen mode
# (much higher resolution) by clicking the little box in the bottom-right
# corner of the video.
YouTubeVideo("chBLLNBGoEE",width=640,height=360) # modules and pyplot
# ensures that the plots made by matplotlib/pyplot show up in the notebook!
%matplotlib inline
# imports the pyplot module from matplotlib
import matplotlib.pyplot as plt
x1 = [2,4,6,8,10,12,14,16,18]
y1 = [10,8.25,7.5,7,6.5,7,7.5,8.25,10]
x2 = [5, 15]
y2 = [15, 15]
# put your plotting commands here!
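# One possible solution (illustrative only; any styling works): draw the first
# series as a red dashed curve and the second as blue squares.
plt.plot(x1, y1, 'r--')
plt.plot(x2, y2, 'bs')
plt.show()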
# Display a specific YouTube video, with a given width and height.
# WE STRONGLY RECOMMEND that you can watch the video in full-screen mode
# (much higher resolution) by clicking the little box in the bottom-right
# corner of the video.
YouTubeVideo("BTXyE3KLIOs",width=640,height=360) # numpy
# put your code here!
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Useful references
Step2: Useful references
|
1,355
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
% matplotlib inline
def ErrorPlot( wavenumber,windowLength ):
data = np.fromfunction( lambda x: np.sin((x - windowLength // 2) / 128. * 2 * np.pi * wavenumber), (128 + windowLength // 2, ) ) # create one sine wave; "wavenumber" matches the parameter name, and // keeps the shape an integer
datafiltered = medianFilter(data, windowLength) #calculate the filtered wave with the medianFiltered function
data = data[ windowLength / 2 : - windowLength ] # slice the data array to synchronize both waves
datafiltered = datafiltered[ : len(data) ] # cut the filtered wave to the same length as the data wave
error = ErrorRate(data,datafiltered,windowLength,wavenumber) #calculate the error with the ErrorRate function
plt.axis([0, y + 1, 0, 1.2])
plt.xlabel('Wave number', fontsize = 20)
plt.ylabel('Error rate', fontsize = 20)
plt.scatter(*error)
def ErrorRate(data,datafiltered,windowLength, wavenumber):
errorrate = data-datafiltered #calculate the difference between the sine wave and the filtered wave
error = [] #creating a list and save the error rate with the matching wavenumber in it
errorrate = np.abs(errorrate)
error.append([wavenumber ,np.mean(errorrate)])# fill the list with the errorrate and corresponding wave number
error = zip(*error) #zip the error ([1,1],[2,2],[3,3]) = ([1,2,3],[1,2,3])
return error
def medianFilter( data, windowLength ):
if (windowLength < len(data) and data.ndim == 1):
tempret = np.zeros(len(data)-windowLength+1) # creating an array where the filtered values will be saved in
if windowLength % 2 == 0: # even window lengths shift the filtered wave out of sync with the input, so handle them separately
for c in range(0, len(tempret)):
tempret[c] = np.median( data[ c : c + windowLength +1 ] ) # write the values of the median filtered wave in tempret, calculate the median of all values in the window
return tempret
else:
for c in range(0, len(tempret)):
tempret[c] = np.median( data[ c : c + windowLength ] )
return tempret
else:
raise ValueError("windowLength must be smaller than len(data) and data must be a 1D array")
fig = plt.figure()
for y in range (0,40):
ErrorPlot(y,5)
pp = PdfPages( 'Error of the median filtered sine waves with different wave numbers.pdf')
pp.savefig(fig)
pp.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I plot the error of the filtered wave. I use the absolute values of the difference between the sine wave and the median-filtered wave and calculate their mean to get the error. I use a window length of 5 and different sine wave numbers.
Step2: Plot
Step3: With a higher wave number the error rate continues to rise. The error rates at wave numbers 16 and 32 are unexpected.
|
1,356
|
<ASSISTANT_TASK:>
Python Code:
for i in range(1, 10, 2):
print i
for i in range (5, 1, -1):
print i
count = 0
while (count < 5):
print count
count = count + 1
total = 0
for i in range(4):
total = total + i
print total
name = "Mits"
for i in name:
print i
name = "Wilfred"
newName = ""
for letter in name:
newName = newName + letter
print newName
name = "Wilfred"
newName = ""
for letter in name:
newName = letter + newName
print newName
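# Aside (illustrative): the same reversal can be done in one step with slicing
print name[::-1]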
seq = "AGCTGATGC"
count = 0
for letter in seq:
count = count + 1
print count
seq = "AGCTGATGC"
count = 0
for letter in seq:
if letter == "T":
count = count + 1
print count
count = 0
while count < 5:
print count
print "Done"
count = 0
while count > 0:
print count
print "Done"
count = 0
while count < 10:
print count
count = count + 1
print "Done"
count = 10
while count > 0:
count = count - 1
print "Done"
a = True
count = 0
while a:
count = count + 1
print "Done"
x = 1
while x != 100:
x = x + 5
print "Done"
x = 1
while x <= 100:
x = x + 5
print "Done"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Your guess
Step2: Your guess
Step3: Your guess
Step4: Your guess
Step5: Your guess
Step6: Your guess
Step7: Your guess
Step8: Your guess
Step9: Your guess
Step10: Endless loop or not? Your guess
Step11: Endless loop or not? Your guess
Step12: Endless loop or not? Your guess
Step13: Endless loop or not? Your guess
Step14: Endless loop or not? Your guess
Step15: Endless loop or not? Your guess
|
1,357
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
import seaborn as sns
sns.set_style("white")
import util
df = util.load_burritos()
N = df.shape[0]
# Define predictors of the model
m_lm = ['Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap']
# Remove incomplete data
dffull = df[np.hstack((m_lm,'overall'))].dropna()
X = sm.add_constant(dffull[m_lm])
y = dffull['overall']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
print(1 - np.var(res.resid_pearson) / np.var(y))
# Visualize coefficients
sns.set_style("whitegrid")
from tools.plt import bar # helper from the author's repo; unused below, since plt.bar is called directly
newidx = np.argsort(-res.params.values)
temp = np.arange(len(newidx))
newidx = np.delete(newidx,temp[newidx==0])
plt.figure(figsize=(15,7))
plt.bar(np.arange(len(newidx)), res.params[newidx].values, color='.5',yerr=res.bse[newidx].values)
plt.xticks(np.arange(len(newidx)),res.bse[newidx].keys())
ax = plt.gca()
ax.set_ylim((-.5,.5))
ax.set_yticks(np.arange(-.5,.6,.1))
ax.set_xticks([])
figname = 'overall_metric_linearmodelcoef'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
# Get all ingredient keys
startingredients = 29
ingredientkeys = df.keys()[startingredients:]
# Get all ingredient keys with at least 10 burritos
Nlim = 10
ingredientkeys = ingredientkeys[df.count()[startingredients:].values>=Nlim]
# Make a dataframe for all ingredients
dfing = df[ingredientkeys]
# Convert data to binary
for k in dfing.keys():
dfing[k] = dfing[k].map({'x':1,'X':1,1:1})
dfing[k] = dfing[k].fillna(0)
# Run a general linear model to predict overall burrito rating from ingredients
X = sm.add_constant(dfing)
y = df.overall
lm = sm.GLM(y,X)
res = lm.fit()
print(res.summary())
origR2 = 1 - np.var(res.resid_pearson) / np.var(y)
# Test if the variance explained in this linear model is significantly better than chance
np.random.seed(0)
Nsurr = 1000
randr2 = np.zeros(Nsurr)
for n in range(Nsurr):
Xrand = np.random.rand(X.shape[0],X.shape[1])
Xrand[:,0] = np.ones(X.shape[0])
lm = sm.GLM(y,Xrand)
res = lm.fit()
randr2[n] = 1 - np.var(res.resid_pearson) / np.var(y)
print('p = ' + str(np.mean(randr2 > origR2)))
# Average each metric over each Location
# Avoid case issues; in the future should avoid article issues
df.Location = df.Location.str.lower()
m_Location = ['Location','N','Yelp','Google','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
tacoshops = df.Location.unique()
TS = len(tacoshops)
dfmean = pd.DataFrame(np.nan, index=range(TS), columns=m_Location)
for ts in range(TS):
dfmean.loc[ts] = df.loc[df.Location==tacoshops[ts]].mean()
dfmean.loc[ts, 'N'] = sum(df.Location == tacoshops[ts]) # .loc avoids pandas chained-assignment pitfalls
dfmean.Location = tacoshops
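# Aside (a sketch of an equivalent, up to row order): pandas groupby builds the
# same per-Location averages in one pass; the explicit loop above is kept for
# readability.
dfmean_alt = df.groupby('Location')[m_Location[2:]].mean().reset_index()
dfmean_alt['N'] = df.groupby('Location').size().values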
# Note high correlations between features
m_Yelp = ['Google','Yelp','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
M = len(m_Yelp)
dfmeancorr = dfmean[m_Yelp].corr()
from matplotlib import cm
clim1 = (-1,1)
plt.figure(figsize=(10,10))
cax = plt.pcolor(range(M+1), range(M+1), dfmeancorr, cmap=cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(m_Yelp,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(m_Yelp,size=25)
plt.xticks(rotation='vertical')
plt.xlim((0,M))
plt.ylim((0,M))
plt.tight_layout()
# GLM for Yelp: all dimensions
m_Yelp = ['Hunger','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
dffull = dfmean[np.hstack((m_Yelp,'Yelp'))].dropna()
X = sm.add_constant(dffull[m_Yelp])
y = dffull['Yelp']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
print(res.pvalues)
print(1 - np.var(res.resid_pearson) / np.var(y))
# GLM for Yelp: some dimensions
m_Yelp = ['Tortilla','overall']
dffull = dfmean[np.hstack((m_Yelp,'Yelp'))].dropna()
X = sm.add_constant(dffull[m_Yelp])
y = dffull['Yelp']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
plt.figure(figsize=(4,4))
ax = plt.gca()
dfmean.plot(kind='scatter',x='Tortilla',y='Yelp',ax=ax,**{'s':40,'color':'k','alpha':.3})
plt.xlabel('Average Tortilla rating',size=20)
plt.ylabel('Yelp rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
plt.ylim((2,5))
plt.tight_layout()
print(sp.stats.spearmanr(dffull.Yelp, dffull.Tortilla))
figname = 'corr-Yelp-tortilla'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Linear model 1
Step3: Linear model 2
Step4: Linear model 3. Predicting Yelp ratings
|
1,358
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-3', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
1,359
|
<ASSISTANT_TASK:>
Python Code:
# ! pip install --user graphistry
import graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
graphistry.__version__
import datetime, pandas as pd
e_df = pd.DataFrame({
's': ['a', 'b', 'c', 'a', 'b', 'c', 'a', 'd', 'e'],
'd': ['b', 'c', 'a', 'b', 'c', 'a', 'c', 'e', 'd'],
'time': [datetime.datetime(1987, 10, 1), datetime.datetime(1987, 10, 2), datetime.datetime(1987, 10, 3),
datetime.datetime(1988, 10, 1), datetime.datetime(1988, 10, 2), datetime.datetime(1988, 10, 3),
datetime.datetime(1989, 10, 1), datetime.datetime(1989, 10, 2), datetime.datetime(1989, 10, 3)]
})
n_df = pd.DataFrame({
'n': ['a', 'b', 'c', 'd', 'e'],
'score': [ 1, 30, 50, 70, 90 ],
'palette_color_int32': pd.Series(
[0, 1, 2, 3, 4],
dtype='int32'),
'hex_color_int64': pd.Series(
[0xFF000000, 0xFFFF0000, 0xFFFFFF00, 0x00FF0000, 0x0000FF00],
dtype='int64'),
'type': ['mac', 'macbook', 'mac', 'macbook', 'sheep'],
'assorted': ['Canada', 'mac', 'macbook', 'embedded_smile', 'external_logo'],
'origin': ['Canada', 'England', 'Russia', 'Mexico', 'China']
})
g = graphistry.edges(e_df, 's', 'd').nodes(n_df, 'n')
g.encode_point_icon(
'assorted',
shape="circle", #clip excess
categorical_mapping={
'macbook': 'laptop', #https://fontawesome.com/v4.7.0/icons/
        'Canada': 'flag-icon-ca', #ISO3166-1 alpha-2: https://github.com/datasets/country-codes/blob/master/data/country-codes.csv
        'embedded_smile': 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAARkAAAC0CAMAAACXO6ihAAAAgVBMVEUAAAD////T09P4+PiYmJihoaHf39/7+/vm5uYLCwsRERHy8vJ+fn7v7+83Nze2trbZ2dlRUVFGRka9vb1xcXEYGBhbW1tWVlbq6uq7u7usrKxgYGA8PDwWFhbNzc14eHhpaWmoqKiPj48vLy8oKCggICCIiIiUlJRBQUFKSkokJCTGfDo0AAAOxklEQVR4nO1d6YKyOgxFNkFEUFwQFFEUdd7/AS9L06ZlkZm5I37i+acUSA9dkjRNJal/JPNYjfxRBlvW1guzb3leBPupJo8wLGfat0wvgaOrjEQo0b1vsfrGTKvSUiLtW7RekaRNvGRQV32L1x9it5mX0Wh86lu+3uCM24jJEPYtYT+468Kg6xuGz3cuu28Ze0FqYA7c4/Wwud3WUy3ihpq+pewBB9yTnPWMXjivfURYjxL2hIDV3j+KFx3Wwwan8TFixl5VpdtvswuWHEXGKOhBuD4xZS0mntRcPxvRYT1Lkvt0OSwLiql38qy+xP65Ar0K5nRWCgas5dZgQvWYtkEkdDzPG5gSTGeeoHkMmRDzWx+SFhx3aDGhRbXg+GmC9Y0E+pKbNJa5IfXYf6Js/eIEVW4mRuIMhKH0pztM2MvmMjtMzEh+nnC9QiP1jb6ayzgcM0PpTsSOVA4tZfQhMgNmQWsX4dwTI/1ZsvUL8G7GbYV8jplhuGjupNJKayl+7en8JNn6hUpq67WWWmJitk8SrWfArPNAR0Hdybg9R7KecQYVbtFebsuI2TxHsr5xh1mnvSEwTc9qUZTfCiHRZqwGf1WJxB7YvCQhM7utzczQ1BQ/SbDesezCDLYN5k8TrWdQZprNyT0mRm/tde8E2psaF9hmnJbX4vR7M2zo2NrQnea8MTmYARi1iKj2uhBMMx7QAiVbmnSqHSXmXHkZjAEFpF1ZtR3hUlgNpqlvWO+JBLle3DW6EMqWyMuwmOEdmXYabubzTZjy3ciFYXg4A3AOMWBRqUQwOmfCjPLA7nwzpNVOwyOV9p3cW+8HtZWXKFN0DqQZGX2L+mRMtGZe3CI4GqZ2rW9Rn45rIy+lN6aj4+8dsa6Lj9bjS3k1AUV5MOYkRhjhNaWxLmtsyfLuyZHjOMdTXZjaAHBPtSCSM3ialg5ren4M85xk+FY82uoTvVaDS1z4MOy2FfE3xuoWp6qqaiVU7TQNb+V4k1ATQhvcALRfOrJsWLzJrdiGHMmOGiKf1qDiGs93zX+0pYfCH4zjXIqvxmM+EIYy1MRRjW+mFcOwG9Z6y07KBoj+wHfEWvT9ftpMgUnw3X5U4u3XFMSNlARjX/e2GVyjibe4b8n/GBVHja9H14Wgxu3n6dbV+SQA2/deutxvBVq2aksU0QkpO9Z7O272HsdLdHigvSGXznuPMgmek6wILXObl2S2u1VtatAFlfh5UvaAL0QMzhoyWV4D2bdqx1g1shU7Ct7c04e6ko5jaWZkQcWu61tmuAzfPTgYbdbmd1LeYdhp2a/xzqBbnEa2EHwFoUeVve3DwJkGQlfmX2hMw1rRpmCjb0WBIZxZLdvC3hg0WK9mmCVX3PdWchuwAp1NqTaMBTDTg1z9YwomUI0rAaIBhuB/qYCaS1bNRVi3fW/DqAFzaDJx9doKvBLPFuolcABdrmYXytJqbk7vD+gwdRrLEfTip0v1CiCVrwv1ncGsNZzlJASTVL5uVzIoOv7u6WK9AEJS+7otpmCAD9NogtDOmqEE9gi2bv9/XwTN7QLMKXlwoQ4FnMY2swQf+EDtbND/K1sK6MQ0TGWG7RfURbcl3ZAwqOAYhITUfywkjKZB98ZgI/GAAZdbA6AG+JsvJrWBLmVvGTUTlRIzSP9DCZax04XmsWZrLNaAI4MXLL5BMZw4uasyS3YwTMcMoG0Xz7Dz+M/qg2by+Sq49C1cv9g0MeMMnBghWRNrMQOeligOdpUYf7iKDMbGE4lxBpK16SH2sY7Cy8bRepCLkg0w1Uh2DUOPInWgScfbcFkshpLi64MPPvjggw8++OCDDz744IMPPvjggw8++OCDDz744IM/wWTF0FrwHERe/ByZXgNbxQa07vc6Fou90ZunPsDAqVZaikGgljycEJtuzFxo7OdwNnJ3Y+ZOywzlQFMuqUgLMwdaZiCHU2ZII5ZmsLnUENuMeTlPrYfMnCkzncaZxNd14w12dpgddmlD0KPcab9Gse/lDbIRdmHGTIssI1634HtlQMxI0nyapnE3beY4GhQz3bH2/4iZr83ykGE5b0m2tciKLDeVeMxbXNwZtpwceAmLW/EmUp6ZW/72uGP0XhIWL7zj0WdGdqq2MbMp7ur2DoDmRb4yHo8VxZA9Jy7/3J1UQP7PMtKzIooRBZiCdeDp9ji/1dY9TyV1ozeq+W6LtRPZ5a0aqztm5hS5+dtt2UPxwXP2EBX3p+lWtgtZ9chLIT50Cfsxo2txg5Y9aEZv13KJr5FR3nXMnhayZ5Paxuwfuj/6HAjB3YpV2HAHtvPGkxKPRfJaNF717gtZfi3jlL13z8rakrlFZey4yswdJVpU2L5lljAMn5sR2viN4zJboWFXkg1P8Z6qUcrtCIlMrIST8H6UuA30SvQA9MopOxwng7fBebEV0ozPlZjwHFnd9uyXfRMyasOWHcrMkk/tGpktzHzVvPEq3WqkCHlmQvzxZU4JJ8ygw7fK04C+6s/IUXhmZG4PDmHmLFduy6FxzCjiYSqQzYkyI+5G0JqZ2dclXD6yFFoIc44Zh/s83ZhpODxIYEZIuV8yU3eAzEhkpgr9xjNTefWlkZna87GOtbvKFhwzfGfrxAzK/qfngJx3OTPLmr0l5cWCGfb9jOJWlMhqX+n2mNpTlRnbMFDXbWQGfupemgYukfUo1bRda4cNUgEyx0N1nMmZgaxkI8Ur9zLeSH1zZmZTVcPtwgucUpiCGdqijFMh9+VIap/3hvSqoa/iBupJY0T6e4EZVwtXyXIL7BHz8a4GHrxjxnGllTNAaBBmrk4QwOPdoEA+myWnq4Y+rr0NiBmbvWCtasAnYSZk/+TMgPhjmpVBZcwUwzP9lO5yBc05Z+YMnPmgIEBRMk7MqEhBkVjmwtrAjGcmIolnoHI6NQZWEb4DCthwOSDM5ICxRtBnWEeL5hM4d7ikHjbpsa1nS3YeGx2xWSYLjWdmDhOHXogfUmbW8MYY7hSYgWQ8LBXGkfuHMqNDGtMV+QOlYoQvxzFDk0tgZsD4FJiZU2L2rHZyMf1Bf2HMEDU6ZwYYRcf5NTFTDg4kw86KfWD2XIGZCSnAdvXPYYLQOWbYzlvSndC5txwzE+jpcLVoAYrahRmSKrQrMzTJHXpcAzNEmjg/C0bPVY4xX66ZGZSmCOppccyw63I3ZmgrXJ7SdLrcdWHGK4emoJC/bGVtzNygM6Gd5A3MkL69uu1myfnC3G0oOchjZo7kL9+sZ8ZrZ4bNQV41JdYDZkj1z4X8q4fMwG0+8n80MCOm1A' # base64 continues on the next line (implicit string concatenation)
        'Q1CDkhHzMD05ny9TNm0HQnT2OenW7MYLQxA6OzjFa6OjJTHWY6MLODrjn/GTP8vmY3wPt2/19mYKbDSa47MgMNGyUueMzMhPReZfMzZiTh8AwF7Wl+OWbQCNWBGTjZ9qfMzCrnihggwKswQ6eJ7zFj/JIZaVMxtuzwT5lBvrS/ZEaC3nT7KTPSLhUPT9P/ghkYgW3kAezGjPkjZjaQ1HX/Y2YyxWGq85au8wfMUNUZHSB/7MQM9UCgxImPmYHvXShHP2VGyt2w24iZ5tHld8ywI35iG5jZgXBo+oOD0Tsyg5IqPmYGTP8mHbgrMxnOMVVuSgfqz5mJqLIKpzkb0oQTtcSoGzNgV3xH07tAF4x+xswsiFzbkol/3gT3y/jwO2Ys8MpT+Qx05DWsBl9gNn7EDPjLdDatpcg/U8sM9STFP2NmWrzAoCsXnE71fWZuY3y/hNybGTN3egZQWnS2TQSlHzFzgQ/GVD3hTPUKM3TUHp1/yEzRKHXKDLxwipkRcqa2MJNQD1Ph9dgwp17uuaLCjg1HDVC6MmAmVOqZAVt75MKMP0fezlpmqBOp/KcmL3uVGei0CWKG2ktwgE+MmZH5lUT4uy4zEm0jlh5F1M9LmBH1bYGZVQKi6bNFZqSix1L3s15M+UlIbzUWixliZuQu57fkNqcj5jiviXmmaQazJxcN9rIA/4x6u+XzjXmhp96Gi90KBrKg/BYJ+AijYuWTvj/YTcxZKBf0r2jCVG+XyS+sjx+azpUVPOQ8Cr9a6jE3sG3rMjb+Z+z8mKPKXKgF/PwYkAn6w5DRqoa3ynh12A3ZkzMLc+NQp29+HPA1q6vj0TZs2caJDvGuli87UtFKfyTVQEa+4+TX8sMdPSrk2PZdWWg4l8oajWIxZqRjPTN+puKY4rKJhXOoNyzHlJjwzIgPMYUFtWVlwcRB00MJD/mkMZSyCZ3FaqbUe0ph88xIc6HR2Gn5KUpmpBR1sNHYIaNBwYy4WsGdFSPWjlvUMRuZGRceCIf/s5YZoTlvG5gBLVxcdHKllVjeF0eaBVfCvUkcM9LX1iXc+d4SmoIfPmKm0hhdJFsjM+Vw+UNmasYFdkbWTmjhVmw+ZEa60cVLK8o68IpnJnvoIb2q6ikNWSepbTMWfyLBmavfdodWvpp6k1vaIaZwKGUXZjzpXl3UjpBds+HFle8TkUmxNxV3pUXERxrn4/gejzMiyKxu5A7QwJU5iO5Xc+HB4qMem9KcFsQjcHSkAhpLmB087slupoXG/D/XPPCD+ydTnFaLqz5mDUGJQi6J2OVAFzrHbnyRJrz0svvwEA6y6kxUlFVyx1YJId7tmrjsNj0etWVN9jekz2w0x3GC6f+TNvqyWJ6O2TsPdYFM5iF7VXDqnDv2Mptj0cnUn1kv5tVxPJmLFCUf+PcBuHX+mZfCwgmcyPXx9zqWIh/pvIxsDVhZrCSZ/zZenhnSQHDnIHPKoY4Zzhb5Ff4VZlDsIKz6bGqY2YDW+fsDK/4VZtisAiamsaPMGNQBDbP7/3CWXfKPMEPlm4ORlk9hVP93r/E6VJlj/rdHKd3mB1CtDe00fclza8EGVaJ0vY6P1BQoViMqlhGB89scrOIDXzELcFhXcxhbGpjRf33GlPjEVzyBqIGZclauZ8b/fdv/d5nZksqLMdI5L9v/4a3/AjMLt+q2c5nSG6p8qGykrlse1hniG19xnMksMy3CLcOmmwNKmLv5Oj7k0frTeN22H+M72PBYv+LclONrsQnjqVpW/kbV4f8AHefeSC51gZgAAAAASUVORK5CYII=',
'external_logo': 'https://awsmp-logos.s3.amazonaws.com/4675c3b9-6053-4a8c-8619-6519b83bbbfd/536ec8b5c79de08fcac1086fdf74f91b.png'
},
default_mapping="question").plot()
g.encode_point_icon(
'score',
as_text=True,
continuous_binning=[
[33, 'low'],
[66, 'mid'],
[200, 'high']
]).plot()
g.encode_point_icon(
'score',
as_text=True,
continuous_binning=[
[33, 'low'],
[66, 'mid'],
[None, 'high']
],
default_mapping='?'
).plot()
codes = pd.read_csv('https://raw.githubusercontent.com/datasets/country-codes/master/data/country-codes.csv')
codes.columns
country_to_iso_flag = {
o['CLDR display name']: 'flag-icon-' + o['ISO3166-1-Alpha-2'].lower()
for o in codes[['CLDR display name', 'ISO3166-1-Alpha-2']].dropna().to_dict('records')
}
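# Spot-check a mapping (illustrative; keys depend on the CSV's CLDR names):
# print(country_to_iso_flag.get('Canada')) # expected: 'flag-icon-ca'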
g.encode_point_icon(
'origin',
shape="circle",
categorical_mapping=country_to_iso_flag,
default_mapping="question").plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Icons as categorical mappings + glyph types
Step2: Icons as continuous mappings and text
Step3: Special continuous bins
Step4: Flag inference
|
1,360
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print(accuracy_score(outcomes[:5], predictions))
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print(accuracy_score(outcomes, predictions))
vs.survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
    """Model with one feature:
    - Predict a passenger survived if they are female.
    """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'male' :
predictions.append(0)
else :
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print(accuracy_score(outcomes, predictions))
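# An equivalent vectorized form of predictions_1 (sketch):
# predictions = pd.Series((data['Sex'] == 'female').astype(int))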
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
    """Model with two features:
    - Predict a passenger survived if they are female.
    - Predict a passenger survived if they are male and younger than 10.
    """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'female':
predictions.append(1)
else :
if passenger['Age'] <= 10 :
predictions.append(1)
else :
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print(accuracy_score(outcomes, predictions))
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'","Parch > 1"])
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'female':
if passenger['Age'] > 40 and passenger['Age'] < 60 and passenger['Pclass'] == 3:
predictions.append(0)
else:
if passenger['SibSp'] < 3 :
predictions.append(1)
else :
predictions.append(0)
else:
if passenger['Age'] <= 10:
predictions.append(1)
elif passenger['Pclass'] == 1 and passenger['Age'] <= 40:
if passenger['SibSp'] < 3 :
predictions.append(1)
else :
predictions.append(0)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print(accuracy_score(outcomes, predictions))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
|
1,361
|
<ASSISTANT_TASK:>
Python Code:
13 % 5 == 3
12 ** 2 == 144
146 % 67 == 12
19 % 8.5 == 2
4 ** (3 % 11)
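# evaluates to 4 ** 3, i.e. 64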
1024 `megabytes` = 1 `??`
1024 `gigabytes` = 1 `??`
1024 `terabytes` = 1 `??`
movies_per_gb = 1024.0/650.0
print(movies_per_gb) #number of videos I can store in 1GB
movies_per_tb = float(movies_per_gb * 1024)
print(movies_per_tb)
print(movies_per_tb * 3) #movies I can fit on my 3TB drive
print(((1024.0/650.0) * 1024.0) * 3.0)
32 16 8 4 2 1
--------------
0 0 0 1 0 1
Does 16 go into 27?
Yes.
16 8 4 2 1
--------------
1
Move to next bit. NUMBER = 27 - 16 = 11
Does 8 go into 11?
Yes.
16 8 4 2 1
--------------
1 1
Move to next bit. NUMBER = 11 - 8 = 3
Does 4 go into 3?
No.
16 8 4 2 1
--------------
1 1 0
Move to next bit. NUMBER still equals 3
Does 2 go into 3?
Yes.
16 8 4 2 1
--------------
1 1 0 1
Move to next bit. NUMBER = 3 - 2 = 1
Does 1 go into 1?
Yes.
16 8 4 2 1
--------------
1 1 0 1 1
Move to next bit. NUMBER = 1 - 1 = 0.
We are at the right most bit.
So, the binary representation for 27 is:
11011
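# A quick check of the walkthrough above, using Python's built-ins:
print(bin(27)) # -> '0b11011'
print(int('11011', 2)) # -> 27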
128 64 32 16 8 4 2 1
---------------------------
0 1 1 1 1 1 1 1 (127)
0 0 0 0 0 0 1 0 (2)
0 1 0 0 0 0 0 0 (64)
0 1 0 0 1 1 1 1 (79)
0 1 1 0 0 0 1 0 (98)
---
128 64 32 16 8 4 2 1
---------------------------
1 1 1 0 1 1 1 1 (239)
0 0 0 1 1 1 1 0 (30)
1 1 1 1 0 0 0 1 (241)
1 0 1 0 1 0 1 0 (170)
0 1 0 1 0 1 0 1 (85)
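# The denary answers in parentheses can be verified with int(bits, 2), e.g.:
for bits in ['01111111', '00000010', '01000000', '01001111', '01100010']:
    print(bits, '=', int(bits, 2)) # 127, 2, 64, 79, 98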
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Binary
Step2: 1024 byte = 1 kilobyte
Step3: Binary
Step4: Denary to Binary and Binary to Denary
Step5: Exercise
|
1,362
|
<ASSISTANT_TASK:>
Python Code:
# Importing all the required modules and the helper functions
import numpy as np
import urllib.request
from bs4 import BeautifulSoup
from nltk import sent_tokenize
from nltk import word_tokenize
import re
from gensim.models import Word2Vec
import pickle
# the following two modules are helper functions to generate features from the sentences
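# get_feature represents each sentence as the average of its word vectors
# (a bag-of-embeddings: word order is ignored; a label column is appended
# when one is supplied)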
def get_feature(text,feature_dimension,wordset,model, label = None):
features = None
for sample in text:
paragraph = sample.lower()
sentences = sent_tokenize(paragraph)
for sentence in sentences:
feature_vector = np.zeros(feature_dimension)
words = word_tokenize(sentence)
count = 0
for word in words:
if word in wordset and word.isalnum():
count = count + 1
feature_vector = feature_vector + model[word]
if count != 0:
feature_vector = feature_vector / float(count)
if label is not None:
feature_vector = np.append(feature_vector, label)
feature_vector = feature_vector[np.newaxis]
if features is None:
features = feature_vector
else:
features = np.concatenate((features, feature_vector))
return features
def generate_features(feature_dimension,wordset,model):
with open("patient.txt") as patfile:
patient = patfile.readlines()
patfile.close()
with open("doctor.txt") as docfile:
doctor = docfile.readlines()
docfile.close()
patient_features = get_feature(patient,feature_dimension,wordset,model,label=0)
doctor_features = get_feature(doctor,feature_dimension,wordset,model,label=1)
features = np.concatenate((patient_features,doctor_features))
return features
def predict(clf, text,feature_dimension, wordset, model):
paragraph = text.lower()
sentences = sent_tokenize(paragraph)
features = get_feature([text],feature_dimension,wordset,model)
pred = clf.predict(features)
for i,item in enumerate(pred):
if item == 0:
ret = "patient"
else:
ret = "doctor"
print("{} : {}".format(sentences[i],ret))
print()
return pred
base_url = "https://www.askthedoctor.com/browse-medical-questions"
base_f = urllib.request.urlopen(base_url)
base_soup = BeautifulSoup(base_f,"lxml")
# categories of diseases
categories = [(base_anchor["href"],base_anchor["title"]) for base_div in base_soup.findAll("div",{"class":"disease_column"}) for base_anchor in base_div.findAll("a",{"itemtype":"https://schema.org/category"})]
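# 'categories' now holds (href, title) pairs, one per disease category link.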
print("Collecting data ... ")
with open("patient.txt","w") as patientfile, open("doctor.txt", "w") as doctorfile:
for category in categories:
topic = category[1]
print(topic)
try:
url = category[0]
f = urllib.request.urlopen(url)
soup = BeautifulSoup(f,"lxml")
divs = soup.findAll('div',{"class":"question_az"})
for i,div in enumerate(divs):
inner_url = div.find('a')['href']
inner_f = urllib.request.urlopen(inner_url)
inner_soup = BeautifulSoup(inner_f,"lxml")
question = inner_soup.find('span',{"class":"quesans"})
question = question.text.replace(","," ")
question = re.sub('[.]+', '.',question)
for token in sent_tokenize(question):
if len(word_tokenize(token)) > 3:
patientfile.write("{}\n".format(token))
answer = inner_soup.find('span', {"class": "answer quesans"})
            answer = answer.text.replace("\n(adsbygoogle = window.adsbygoogle || []).push({});", "").replace("\n", " ").replace("  ", " ").replace(",", " ")
answer = re.sub('[.]+', '.',answer)
for token in sent_tokenize(answer):
if len(word_tokenize(token)) > 3:
doctorfile.write("{}\n".format(token))
except:
print("Error ................ {}".format(topic))
patientfile.close()
doctorfile.close()
print("Data saved !")
data_matrix = []
with open("patient.txt","r") as patfile, open("doctor.txt","r") as docfile:
data_matrix = patfile.readlines()
data_matrix.extend(docfile.readlines())
patfile.close()
docfile.close()
# converting the whole data into lower case
data_matrix = [sample.lower() for sample in data_matrix]
print("The Dataset consists of {} sentences.".format(len(data_matrix)))
# Formatting the data to provide as input to gensim package's word2vec model
words_matrix = []
for sample in data_matrix:
sentences = sent_tokenize(sample)
for sentence in sentences:
words = word_tokenize(sentence)
words_new = [word for word in words if word.isalnum()]
words_matrix.append(words_new)
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',\
level=logging.INFO)
# Parameters required for training word2vec model
num_features = 300 # Word vector dimensionality
min_word_count = 5 # Minimum word count
num_workers = 4 # Number of threads to run in parallel
context = 10 # Context window size
downsampling = 1e-3 # Downsample setting for frequent words
# Initialize and train the model (this will take some time)
from gensim.models import word2vec
print("Training model...")
model = word2vec.Word2Vec(words_matrix, workers=num_workers, \
size=num_features, min_count = min_word_count, \
window = context, sample = downsampling)
# If you don't plan to train the model any further, calling
# init_sims will make the model much more memory-efficient.
model.init_sims(replace=True)
# It can be helpful to create a meaningful model name and
# save the model for later use. You can load it later using Word2Vec.load()
model_name = "word2vec_model"
model.save(model_name)
print("Word2Vec model saved !")
from gensim.models import Word2Vec
model = Word2Vec.load("word2vec_model")
word_vectors = model.wv.syn0
print("The model has {} words in the vocabulary and the dimension of the vectors is {}".format(word_vectors.shape[0],word_vectors.shape[1]))
print("I\n{}\n".format(model.most_similar("i")))
print("swelling\n{}\n".format(model.most_similar("swelling")))
print("headache\n{}\n".format(model.most_similar("headache")))
print("fever\n{}\n".format(model.most_similar("fever")))
model = Word2Vec.load("word2vec_model")
word_vectors = model.wv.syn0
feature_dimension = word_vectors.shape[1]
# all words in the vocabulary
wordset = set(model.wv.index2word)
print("Generating features ...")
features = generate_features(feature_dimension,wordset,model)
# dividing the dataset into train and test datasets.
indices = np.random.permutation(features.shape[0])
test_idx,training_idx = indices[:2000], indices[2000:]
test_features, train_features = features[test_idx,:], features[training_idx,:]
train_labels = train_features[:,-1]
train_features = train_features[:,:-1]
test_labels = test_features[:,-1]
test_features = test_features[:,:-1]
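# For reference, the manual permutation split above can also be written with
# scikit-learn's train_test_split -- a minimal sketch (same 2000-sample
# hold-out, but a different random draw; the names here are illustrative only):
from sklearn.model_selection import train_test_split
X_all, y_all = features[:, :-1], features[:, -1]
tr_X, te_X, tr_y, te_y = train_test_split(X_all, y_all, test_size=2000, random_state=0)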
# Saving features
np.save("train_features.npy",train_features)
np.save("train_labels.npy", train_labels)
np.save("test_features.npy",test_features)
np.save("test_labels.npy",test_labels)
print("Features saved")
# Loading train features
train_features = np.load("train_features.npy")
train_labels = np.load("train_labels.npy")
from sklearn.svm import SVC
clf = SVC(kernel="linear",C=100)
print("Training classifier ....")
clf.fit(train_features,train_labels)
import pickle
with open("clf_model.pkl","wb") as clffile:
pickle.dump(clf,clffile)
clffile.close()
print("Classifier Model saved !")
with open("clf_model.pkl","rb") as clffile:
clf = pickle.load(clffile)
clffile.close()
test_features = np.load("test_features.npy")
test_labels = np.load("test_labels.npy")
pred = clf.predict(test_features)
from sklearn.metrics import accuracy_score
acc = accuracy_score(test_labels,pred)*100
print("The accuracy for {} samples for the model : {}%".format(test_features.shape[0],acc))
with open("clf_model.pkl","rb") as clffile:
clf = pickle.load(clffile)
clffile.close()
model = Word2Vec.load("word2vec_model")
word_vectors = model.wv.syn0
feature_dimension = word_vectors.shape[1]
# all words in the vocabulary
wordset = set(model.wv.index2word)
text = "i still cough few times a day. what should i do?"
res = predict(clf, text,feature_dimension, wordset, model)
text = "i have severe pain in my abdomen. do i have to go to the doctor? wash your hands everytime and follow hygenic practices"
res = predict(clf, text,feature_dimension, wordset, model)
text = "i have a sore throat. it has been there for the past week."
res = predict(clf, text,feature_dimension, wordset, model)
text = "do you have sore throat? Does your throat feel itchy? Do you have flu?"
res = predict(clf, text,feature_dimension, wordset, model)
text = "you should apply neomycin ointment on your chin"
res = predict(clf, text,feature_dimension, wordset, model)
text = "do you think I have infection which is causing my blood pressure to rise? Yes, your blood pressure is increasing because of infection"
res = predict(clf, text,feature_dimension, wordset, model)
text = "Are you comfortable? If you are not comfortable, please let me know. No I am not comfortable and in too much in pain right now."
res = predict(clf, text,feature_dimension, wordset, model)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Data collection
Step3: Word2Vec
Step4: Testing Word2Vec Model
Step5: Saving the features
Step6: Training Classifier Model
Step7: Evaluating Classifier Model
Step8: Trying out some random sentences
|
1,363
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'a':[12,13,23,22,23,24,30,35,55], 'b':[1,1,1,2,2,2,3,3,3]})
import numpy as np
def g(df):
return df.groupby("b")["a"].agg([np.mean, np.std])
result = g(df.copy())
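# For this tiny frame the aggregation is easy to verify by hand; pandas maps
# np.std passed to agg onto the sample standard deviation (ddof=1), giving:
print(result)
#         mean        std
# b
# 1  16.000000   6.082763
# 2  23.000000   1.000000
# 3  40.000000  13.228757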
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,364
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import display
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
# Load the dataset
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data', header=None)
df.shape
df.head()
# Assign the 30 features to a NumPy array object X
from sklearn.preprocessing import LabelEncoder
X = df.loc[:, 2:].values
y = df.loc[:, 1].values
# Encode M/B as 1/0
le = LabelEncoder()
y = le.fit_transform(y)
le.transform(['M', 'B'])
# Split into training and test datasets
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=1)
display(X_train.shape)
display(X_test.shape)
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
# Chain scaling (standardization), principal component analysis (PCA), and logistic regression in a pipeline
pipe_lr = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', LogisticRegression(random_state=1))
])
pipe_lr.fit(X_train, y_train)
print('Test Accuracy: {:.3f}'.format(pipe_lr.score(X_test, y_test)))
import numpy as np
from sklearn.cross_validation import StratifiedKFold
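# NOTE: this cell relies on the pre-0.18 cross_validation API
# (StratifiedKFold(y=..., n_folds=...)); under scikit-learn >= 0.18 the rough
# equivalent is model_selection.StratifiedKFold(n_splits=10).split(X_train, y_train).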
# Specify the data to split, the number of folds, and the random-generator state, and create a stratified k-fold cross-validation instance
kfold = StratifiedKFold(y=y_train, n_folds=10, random_state=1)
scores = []
for k, (train, test) in enumerate(kfold):
pipe_lr.fit(X_train[train], y_train[train])
score = pipe_lr.score(X_train[test], y_train[test])
scores.append(score)
print('Fold: {}, Class dist.: {}, Acc: {:.3f}'.format(k + 1, np.bincount(y_train[train]), score))
# Print the mean and standard deviation of the accuracy
print('CV accuracy: {:.3f} +/- {:.3f}'.format(np.mean(scores), np.std(scores)))
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import cross_val_score
else:
from sklearn.model_selection import cross_val_score
# Compute the model's accuracy with the cross-validation function cross_val_score
scores = cross_val_score(estimator=pipe_lr, X=X_train, y=y_train, cv=10, n_jobs=1)
print('CV accuracy scores: {}'.format(scores))
print('CV accuracy: {:.3f} +/- {:.3f}'.format(np.mean(scores), np.std(scores)))
import matplotlib.pyplot as plt
if Version(sklearn_version) < '0.18':
from sklearn.learning_curve import learning_curve
else:
from sklearn.model_selection import learning_curve
pipe_lr = Pipeline([('scl', StandardScaler()),
('clf', LogisticRegression(penalty='l2', random_state=0))
])
# Compute cross-validated accuracy with the learning_curve function
train_sizes, train_scores, test_scores = learning_curve(estimator=pipe_lr,
X=X_train, y=y_train,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10, n_jobs=1)
display(train_sizes)
# Compute the means and standard deviations
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean, color='blue', marker='o', markersize=5, label='training accuracy')
# Shade the band of mean +/- standard deviation with the fill_between function
plt.fill_between(train_sizes, train_mean + train_std, train_mean - train_std, alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean, color='green', linestyle='--', marker='s', markersize=5, label='validation accuracy')
plt.fill_between(train_sizes, test_mean + test_std, test_mean - test_std, alpha=0.15, color='green')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.0])
plt.show()
if Version(sklearn_version) < '0.18':
from sklearn.learning_curve import validation_curve
else:
from sklearn.model_selection import validation_curve
param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
# Vary a model parameter with the validation_curve function and compute cross-validated accuracy
# clf__C is a parameter of the LogisticRegression object
train_scores, test_scores = validation_curve(estimator=pipe_lr,
X=X_train, y=y_train,
param_name='clf__C',
param_range=param_range,
cv=10)
# Compute the means and standard deviations
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean, color='blue', marker='o', markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std, train_mean - train_std, alpha=0.15, color='blue')
plt.plot(param_range, test_mean, color='green', linestyle='--', marker='s', markersize=5, label='validation accuracy')
plt.fill_between(param_range, test_mean + test_std, test_mean - test_std, alpha=0.15, color='green')
plt.grid()
plt.xscale('log')
plt.xlabel('Parameter C')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.0])
plt.show()
from sklearn.svm import SVC
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
pipe_svc = Pipeline([('scl', StandardScaler()),
('clf', SVC(random_state=1))
])
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range, 'clf__kernel': ['linear']},
{'clf__C': param_range, 'clf__gamma': param_range, 'clf__kernel': ['rbf']}
]
# Instantiate the GridSearchCV class, specifying the list of hyperparameter values param_grid to search over
gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring='accuracy', cv=10, n_jobs=-1)
gs = gs.fit(X_train, y_train)
# Print the model's best score
print(gs.best_score_)
# Print the parameter values that yield the best score
print(gs.best_params_)
# Evaluate the performance of the selected model
clf = gs.best_estimator_
display(clf)
clf.fit(X_train, y_train)
print('Test accuracy: {:.3f}'.format(clf.score(X_test, y_test)))
for params, mean_score, scores in gs.grid_scores_:
    print('{:.3f} (+/-{:.3f}) for {}'.format(mean_score, scores.std(), params))
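# grid_scores_ was removed in scikit-learn 0.20; under newer versions the same
# report can be produced from cv_results_ (a sketch, assuming gs is already fitted):
for mean, std, params in zip(gs.cv_results_['mean_test_score'],
                             gs.cv_results_['std_test_score'],
                             gs.cv_results_['params']):
    print('{:.3f} (+/-{:.3f}) for {}'.format(mean, std, params))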
gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring='accuracy', cv=2, n_jobs=-1)
scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=5)
print('CV accuracy: {:.3f} +/-{:.3f}'.format(np.mean(scores), np.std(scores)))
from sklearn.tree import DecisionTreeClassifier
# Specify the decision-tree depth as the hyperparameter and
# instantiate the GridSearchCV class to perform the grid search
estimator = DecisionTreeClassifier(random_state=0)
param_grid = [{'max_depth': [1, 2, 3, 4, 5, 6, 7, None]}]
gs = GridSearchCV(estimator=estimator, param_grid=param_grid, scoring='accuracy', cv=2)
scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=5)
print('CV accuracy: {:.3f} +/-{:.3f}'.format(np.mean(scores), np.std(scores)))
from sklearn.metrics import confusion_matrix
pipe_svc.fit(X_train, y_train)
y_pred = pipe_svc.predict(X_test)
# Build the confusion matrix from the test and predicted labels
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print(confmat)
# Set the figure size
fig, ax = plt.subplots(figsize=(2.5, 2.5))
# Draw a heatmap of the matrix with the matshow function
ax.matshow(confmat, cmap=plt.cm.Blues, alpha=0.3)
for i in range(confmat.shape[0]):
for j in range(confmat.shape[1]):
ax.text(x=j, y=i, s=confmat[i, j], va='center', ha='center')
plt.xlabel('predicted label')
plt.ylabel('true label')
plt.show()
# Print precision, recall, and F1 score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
print('Precision: {:.3f}'.format(precision_score(y_true=y_test, y_pred=y_pred)))
print('Recall: {:.3f}'.format(recall_score(y_true=y_test, y_pred=y_pred)))
print('F1: {:.3f}'.format(f1_score(y_true=y_test, y_pred=y_pred)))
# Score with a custom performance metric
from sklearn.metrics import make_scorer, f1_score
scorer = make_scorer(f1_score, pos_label=0)
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range, 'clf__kernel': ['linear']},
{'clf__C': param_range, 'clf__gamma': param_range, 'clf__kernel': ['rbf']}
]
gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring=scorer, cv=10)
gs = gs.fit(X_train, y_train)
# Print the model's best score
print(gs.best_score_)
# Print the parameter values that yield the best score
print(gs.best_params_)
from sklearn.metrics import roc_curve, auc
from scipy import interp
# Build a Pipeline of scaling, principal component analysis, and logistic regression
pipe_lr = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', LogisticRegression(penalty='l2', random_state=0, C=100.0))
])
# Extract two features
X_train2 = X_train[:, [4, 14]]
# Instantiate the StratifiedKFold class, a stratified k-fold cross-validation iterator
cv = StratifiedKFold(y_train, n_folds=3, random_state=1)
fig = plt.figure(figsize=(7, 5))
mean_tpr = 0.0
# Generate 100 evenly spaced values between 0 and 1
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train, test) in enumerate(cv):
# Fit the model with the fit method and predict class probabilities with the predict_proba method
probas = pipe_lr.fit(X_train2[train], y_train[train]).predict_proba(X_train2[test])
# Compute and plot ROC-curve performance with the roc_curve function
fpr, tpr, thresholds = roc_curve(y_train[test], probas[:, 1], pos_label=1)
# Linearly interpolate TPR (y-axis) against FPR (x-axis)
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
# Compute the area under the curve (AUC)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, label='ROC fold {:d} (area = {:.2f})'.format(i + 1, roc_auc))
# Plot the random-guessing baseline
plt.plot([0, 1], [0, 1], linestyle='--', color=(0.6, 0.6, 0.6), label='random guessing')
# Compute and plot the means of FPR, TPR, and ROC AUC
mean_tpr /= len(cv)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--', label='mean ROC (area = {:.2f})'.format(mean_auc), lw=2)
# Plot the ROC curve for perfect prediction
plt.plot([0, 0, 1], [0, 1, 1], lw=2, linestyle=':', color='black', label='perfect performance')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.title('Receiver Operator Characteristic')
plt.legend(loc='lower right')
plt.show()
# Compute and print the AUC and accuracy of the predictions
pipe_lr = pipe_lr.fit(X_train2, y_train)
y_pred2 = pipe_lr.predict(X_test[:, [4, 14]])
from sklearn.metrics import roc_auc_score, accuracy_score
print('ROC AUC: {:.3f}'.format(roc_auc_score(y_true=y_test, y_score=y_pred2)))
print('Accuracy: {:.3f}'.format(accuracy_score(y_true=y_test, y_pred=y_pred2)))
from sklearn.metrics import precision_score, make_scorer
pre_scorer = make_scorer(score_func=precision_score,
pos_label=1,
greater_is_better=True,
average='micro')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 6.1.2 Combining transformers and estimators in a pipeline
Step2: 6.2 Using k-fold cross-validation to assess model performance
Step3: 6.3 Debugging algorithms with learning and validation curves
Step4: 6.3.2 Addressing over- and underfitting with validation curves
Step5: 6.4 Fine-tuning machine learning models via grid search
Step6: 6.4.2 Algorithm selection with nested cross-validation
Step7: 6.5 Looking at different performance evaluation metrics
Step8: 6.5.2 Optimizing the precision and recall of a classification model
Step9: 6.5.3 Plotting a receiver operating characteristic (ROC) curve
Step10: 6.5.4 Scoring metrics for multiclass classification
|
1,365
|
<ASSISTANT_TASK:>
Python Code:
import qiskit as qk
import numpy as np
from scipy.optimize import curve_fit
from qiskit.tools.qcvv.fitters import exp_fit_fun, osc_fit_fun, plot_coherence
from qiskit.wrapper.jupyter import *
# Load saved IBMQ accounts
qk.IBMQ.load_accounts()
# backend and token settings
backend = qk.IBMQ.get_backend('ibmq_16_melbourne') # the device to run on
shots = 1024 # the number of shots in the experiment
# function for padding with QId gates
def pad_QId(circuit,N,qr):
# circuit to add to, N = number of QId gates to add, qr = qubit reg
for ii in range(N):
circuit.barrier(qr)
circuit.iden(qr)
return circuit
# Select qubit whose T1 is to be measured
qubit=1
# Creating registers
qr = qk.QuantumRegister(5)
cr = qk.ClassicalRegister(5)
# the delay times are all set in terms of single-qubit gates
# so we need to calculate the time from these parameters
params = backend.properties()['qubits'][qubit]
pulse_length=params['gateTime']['value'] # single-qubit gate time
buffer_length=params['buffer']['value'] # spacing between pulses
unit = params['gateTime']['unit']
steps=10
gates_per_step=120
max_gates=(steps-1)*gates_per_step+1
tot_length=buffer_length+pulse_length
time_per_step=gates_per_step*tot_length
qc_dict={}
for ii in range(steps):
step_num='step_%s'%(str(ii))
qc_dict.update({step_num:qk.QuantumCircuit(qr, cr)})
qc_dict[step_num].x(qr[qubit])
qc_dict[step_num]=pad_QId(qc_dict[step_num],gates_per_step*ii,qr[qubit])
qc_dict[step_num].barrier(qr[qubit])
qc_dict[step_num].measure(qr[qubit], cr[qubit])
circuits=list(qc_dict.values())
%%qiskit_job_status
# run the program
status = backend.status()
if status['operational'] == False or status['pending_jobs'] > 10:
print('Warning: the selected backend appears to be busy or unavailable at present; consider choosing a different one if possible')
t1_job=qk.execute(circuits, backend, shots=shots)
# arrange the data from the run
result_t1 = t1_job.result()
keys_0_1=list(result_t1.get_counts(qc_dict['step_0']).keys())# get the key of the excited state '00001'
data=np.zeros(len(qc_dict.keys())) # numpy array for data
sigma_data = np.zeros(len(qc_dict.keys()))
# change unit from ns to microseconds
plot_factor=1
if unit.find('ns')>-1:
plot_factor=1000
punit='$\\mu$s'
xvals=time_per_step*np.linspace(0,len(qc_dict.keys()),len(qc_dict.keys()))/plot_factor # calculate the time steps in microseconds
for ii,key in enumerate(qc_dict.keys()):
# get the data in terms of counts for the excited state normalized to the total number of counts
data[ii]=float(result_t1.get_counts(qc_dict[key])[keys_0_1[1]])/shots
sigma_data[ii] = np.sqrt(data[ii]*(1-data[ii]))/np.sqrt(shots)
# fit the data to an exponential
fitT1, fcov = curve_fit(exp_fit_fun, xvals, data, bounds=([-1,2,0], [1., 500, 1]))
ferr = np.sqrt(np.diag(fcov))
plot_coherence(xvals, data, sigma_data, fitT1, exp_fit_fun, punit, 'T$_1$ ', qubit)
print("a: " + str(round(fitT1[0],2)) + u" \u00B1 " + str(round(ferr[0],2)))
print("T1: " + str(round(fitT1[1],2))+ " µs" + u" \u00B1 " + str(round(ferr[1],2)) + ' µs')
print("c: " + str(round(fitT1[2],2)) + u" \u00B1 " + str(round(ferr[2],2)))
str(params['T1']['value']) +' ' + params['T1']['unit']
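# For reference: exp_fit_fun above is used as a three-parameter exponential
# decay. A self-contained stand-in under the assumed form a*exp(-x/tau) + c
# (an assumption -- verify against your installed qiskit version):
def exp_fit_fun_sketch(x, a, tau, c):
    return a * np.exp(-x / tau) + c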
# Select qubit on which to measure T2*
qubit=1
# Creating registers
qr = qk.QuantumRegister(5)
cr = qk.ClassicalRegister(5)
params = backend.properties()['qubits'][qubit]
pulse_length=params['gateTime']['value'] # single-qubit gate time
buffer_length=params['buffer']['value'] # spacing between pulses
unit = params['gateTime']['unit']
steps=35
gates_per_step=20
max_gates=(steps-1)*gates_per_step+2
num_osc=5
tot_length=buffer_length+pulse_length
time_per_step=gates_per_step*tot_length
qc_dict={}
for ii in range(steps):
step_num='step_%s'%(str(ii))
qc_dict.update({step_num:qk.QuantumCircuit(qr, cr)})
qc_dict[step_num].h(qr[qubit])
qc_dict[step_num]=pad_QId(qc_dict[step_num],gates_per_step*ii,qr[qubit])
qc_dict[step_num].u1(2*np.pi*num_osc*ii/(steps-1),qr[qubit])
qc_dict[step_num].h(qr[qubit])
qc_dict[step_num].barrier(qr[qubit])
qc_dict[step_num].measure(qr[qubit], cr[qubit])
circuits=list(qc_dict.values())
%%qiskit_job_status
# run the program
status = backend.status()
if status['operational'] == False or status['pending_jobs'] > 10:
print('Warning: the selected backend appears to be busy or unavailable at present; consider choosing a different one if possible')
t2star_job=qk.execute(circuits, backend, shots=shots)
# arrange the data from the run
result_t2star = t2star_job.result()
keys_0_1=list(result_t2star.get_counts(qc_dict['step_0']).keys())# get the key of the excited state '00001'
# change unit from ns to microseconds
plot_factor=1
if unit.find('ns')>-1:
plot_factor=1000
punit='$\\mu$s'
xvals=time_per_step*np.linspace(0,len(qc_dict.keys()),len(qc_dict.keys()))/plot_factor # calculate the time steps
data=np.zeros(len(qc_dict.keys())) # numpy array for data
sigma_data = np.zeros(len(qc_dict.keys()))
for ii,key in enumerate(qc_dict.keys()):
# get the data in terms of counts for the excited state normalized to the total number of counts
data[ii]=float(result_t2star.get_counts(qc_dict[key])[keys_0_1[1]])/shots
sigma_data[ii] = np.sqrt(data[ii]*(1-data[ii]))/np.sqrt(shots)
fitT2s, fcov = curve_fit(osc_fit_fun, xvals, data, p0=[0.5, 100, 1/10, np.pi, 0], bounds=([0.3,0,0,0,0], [0.5, 200, 1/2,2*np.pi,1]))
ferr = np.sqrt(np.diag(fcov))
plot_coherence(xvals, data, sigma_data, fitT2s, osc_fit_fun, punit, '$T_2^*$ ', qubit)
print("a: " + str(round(fitT2s[0],2)) + u" \u00B1 " + str(round(ferr[0],2)))
print("T2*: " + str(round(fitT2s[1],2))+ " µs"+ u" \u00B1 " + str(round(ferr[1],2)) + ' µs')
print("f: " + str(round(10**3*fitT2s[2],3)) + 'kHz' + u" \u00B1 " + str(round(10**6*ferr[2],3)) + 'kHz')
print("phi: " + str(round(fitT2s[3],2)) + u" \u00B1 " + str(round(ferr[3],2)))
print("c: " + str(round(fitT2s[4],2)) + u" \u00B1 " + str(round(ferr[4],2)))
# Select qubit to measure T2 echo on
qubit=1
# Creating registers
qr = qk.QuantumRegister(5)
cr = qk.ClassicalRegister(5)
params = backend.properties()['qubits'][qubit]
pulse_length=params['gateTime']['value'] # single-qubit gate time
buffer_length=params['buffer']['value'] # spacing between pulses
unit = params['gateTime']['unit']
steps=18
gates_per_step=28
tot_length=buffer_length+pulse_length
max_gates=(steps-1)*2*gates_per_step+3
time_per_step=(2*gates_per_step)*tot_length
qc_dict={}
for ii in range(steps):
step_num='step_%s'%(str(ii))
qc_dict.update({step_num:qk.QuantumCircuit(qr, cr)})
qc_dict[step_num].h(qr[qubit])
qc_dict[step_num]=pad_QId(qc_dict[step_num],gates_per_step*ii,qr[qubit])
qc_dict[step_num].x(qr[qubit])
qc_dict[step_num]=pad_QId(qc_dict[step_num],gates_per_step*ii,qr[qubit])
qc_dict[step_num].h(qr[qubit])
qc_dict[step_num].barrier(qr[qubit])
qc_dict[step_num].measure(qr[qubit], cr[qubit])
circuits=list(qc_dict.values())
%%qiskit_job_status
# run the program
status = backend.status()
if status['operational'] == False or status['pending_jobs'] > 10:
print('Warning: the selected backend appears to be busy or unavailable at present; consider choosing a different one if possible')
t2echo_job=qk.execute(circuits, backend, shots=shots)
# arrange the data from the run
result_t2echo = t2echo_job.result()
keys_0_1=list(result_t2echo.get_counts(qc_dict['step_0']).keys())# get the key of the excited state '00001'
# change unit from ns to microseconds
plot_factor=1
if unit.find('ns')>-1:
plot_factor=1000
punit='$\\mu$s'
xvals=time_per_step*np.linspace(0,len(qc_dict.keys()),len(qc_dict.keys()))/plot_factor # calculate the time steps
data=np.zeros(len(qc_dict.keys())) # numpy array for data
sigma_data = np.zeros(len(qc_dict.keys()))
for ii,key in enumerate(qc_dict.keys()):
# get the data in terms of counts for the excited state normalized to the total number of counts
data[ii]=float(result_t2echo.get_counts(qc_dict[key])[keys_0_1[1]])/shots
sigma_data[ii] = np.sqrt(data[ii]*(1-data[ii]))/np.sqrt(shots)
fitT2e, fcov = curve_fit(exp_fit_fun, xvals, data, bounds=([-1,10,0], [1, 150, 1]))
ferr = np.sqrt(np.diag(fcov))
plot_coherence(xvals, data, sigma_data, fitT2e, exp_fit_fun, punit, '$T_{2echo}$ ', qubit)
print("a: " + str(round(fitT2e[0],2)) + u" \u00B1 " + str(round(ferr[0],2)))
print("T2: " + str(round(fitT2e[1],2))+ ' µs' + u" \u00B1 " + str(round(ferr[1],2)) + ' µs')
print("c: " + str(round(fitT2e[2],2)) + u" \u00B1 " + str(round(ferr[2],2)))
str(params['T2']['value']) +' ' + params['T2']['unit']
# Select qubit for CPMG measurement of T2
qubit=1
# Creating registers
qr = qk.QuantumRegister(5)
cr = qk.ClassicalRegister(5)
params = backend.properties()['qubits'][qubit]
pulse_length=params['gateTime']['value'] # single-qubit gate time
buffer_length=params['buffer']['value'] # spacing between pulses
unit = params['gateTime']['unit']
steps=10
gates_per_step=18
num_echo=5 # has to be odd number to end up in excited state at the end
tot_length=buffer_length+pulse_length
time_per_step=((num_echo+1)*gates_per_step+num_echo)*tot_length
max_gates=num_echo*(steps-1)*gates_per_step+num_echo+2
qc_dict={}
for ii in range(steps):
step_num='step_%s'%(str(ii))
qc_dict.update({step_num:qk.QuantumCircuit(qr, cr)})
qc_dict[step_num].h(qr[qubit])
for iii in range(num_echo):
qc_dict[step_num]=pad_QId(qc_dict[step_num], gates_per_step*ii, qr[qubit])
qc_dict[step_num].x(qr[qubit])
qc_dict[step_num]=pad_QId(qc_dict[step_num], gates_per_step*ii, qr[qubit])
qc_dict[step_num].h(qr[qubit])
qc_dict[step_num].barrier(qr[qubit])
qc_dict[step_num].measure(qr[qubit], cr[qubit])
circuits=list(qc_dict.values())
%%qiskit_job_status
# run the program
status = backend.status()
if status['operational'] == False or status['pending_jobs'] > 10:
print('Warning: the selected backend appears to be busy or unavailable at present; consider choosing a different one if possible')
t2cpmg_job=qk.execute(circuits, backend, shots=shots)
# arrange the data from the run
result_t2cpmg = t2cpmg_job.result()
keys_0_1=list(result_t2cpmg.get_counts(qc_dict['step_0']).keys())# get the key of the excited state '00001'
# change unit from ns to microseconds
plot_factor=1
if unit.find('ns')>-1:
plot_factor=1000
punit='$\\mu$s'
xvals=time_per_step*np.linspace(0,len(qc_dict.keys()),len(qc_dict.keys()))/plot_factor # calculate the time steps
data=np.zeros(len(qc_dict.keys())) # numpy array for data
sigma_data = np.zeros(len(qc_dict.keys()))
for ii,key in enumerate(qc_dict.keys()):
# get the data in terms of counts for the excited state normalized to the total number of counts
data[ii]=float(result_t2cpmg.get_counts(qc_dict[key])[keys_0_1[1]])/shots
sigma_data[ii] = np.sqrt(data[ii]*(1-data[ii]))/np.sqrt(shots)
fitT2cpmg, fcov = curve_fit(exp_fit_fun, xvals, data, bounds=([-1,10,0], [1, 150, 1]))
ferr = np.sqrt(np.diag(fcov))
plot_coherence(xvals, data, sigma_data, fitT2cpmg, exp_fit_fun, punit, '$T_{2cpmg}$ ', qubit)
print("a: " + str(round(fitT2cpmg[0],2)) + u" \u00B1 " + str(round(ferr[0],2)))
print("T2: " + str(round(fitT2cpmg[1],2))+ ' µs' + u" \u00B1 " + str(round(ferr[1],2)) + ' µs')
print("c: " + str(round(fitT2cpmg[2],2)) + u" \u00B1 " + str(round(ferr[2],2)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Measurement of $T_1$
Step2: The last calibration of $T_1$ was measured to be
Step3: Measurement of $T_2^*$
Step4: Measurement of $T_2$ Echo
Step5: The last calibration of $T_2$ was measured to be
Step6: CPMG measurement
|
1,366
|
<ASSISTANT_TASK:>
Python Code:
import gambit
import numpy
L = numpy.array([[5,3], [6,4]])
L
F = numpy.array([[2,1], [3,4]])
F
g = gambit.Game.from_arrays(L, F)
g.players[0].label = "Leader"
g.players[0].strategies[0].label = "S"
g.players[0].strategies[1].label = "C"
g.players[1].label = "Follower"
g.players[1].strategies[0].label = "S"
g.players[1].strategies[1].label = "C"
import IPython.display; IPython.display.HTML(g.write('html'))
sp = g.support_profile()
sp
sp.undominated()
sp.undominated().undominated()
sp.undominated().undominated().undominated() == sp.undominated().undominated()
rest = sp.undominated().undominated().restrict()
rest
rest.unrestrict()
rest.unrestrict() == g
g = gambit.Game.read_game("bagwell.efg")
g
result = gambit.nash.enummixed_solve(g, use_lrs=True)
len(result)
result[0]
result[0].as_behavior()
result[1].as_behavior()
result[2].as_behavior()
probs = [ gambit.Rational(i, 100) for i in range(99, 0, -1) ]
eqa = [ ]
for prob in probs:
g.players.chance.infosets[0].actions[0].prob = prob
g.players.chance.infosets[0].actions[1].prob = 1-prob
g.players.chance.infosets[1].actions[0].prob = 1-prob
g.players.chance.infosets[1].actions[1].prob = prob
eqa.append([ eqm.as_behavior()[g.players[0].infosets[0].actions[0]]
for eqm in gambit.nash.enummixed_solve(g, use_lrs=True) ])
import pylab
%matplotlib inline
for (prob, solns) in zip(probs, eqa):
pylab.plot([ prob for eqm in solns ], solns, 'ro')
pylab.xlabel("Probability signal is accurate")
pylab.ylabel("Equilibrium probability of choosing $S$")
pylab.show()
g.players.chance.infosets[0].actions[0].prob = gambit.Rational(3,4)
g.players.chance.infosets[0].actions[1].prob = gambit.Rational(1,4)
g.players.chance.infosets[1].actions[0].prob = gambit.Rational(1,4)
g.players.chance.infosets[1].actions[1].prob = gambit.Rational(3,4)
IPython.display.HTML(g.write('html'))
result = gambit.nash.enummixed_solve(g)
len(result)
result[0]
result[1]
result[2]
with file("bagwell-75.nfg", "w") as f:
f.write(g.write('nfg'))
!gambit-enummixed bagwell-75.nfg
!gambit-enummixed -h
!gambit-enummixed -c bagwell-75.nfg
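# The two manual rounds of undominated() earlier generalize to a fixpoint loop;
# a minimal sketch of iterated elimination of strictly dominated strategies:
def iterated_elimination(game):
    profile = game.support_profile()
    while True:
        reduced = profile.undominated()
        if reduced == profile:   # nothing more to eliminate
            return profile
        profile = reduced
iterated_elimination(gambit.Game.from_arrays(L, F))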
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bagwell (1993) pointed out that the usual story of Stackelberg commitment to an action depends critically on the observability of the commitment to the action. If there is any chance that the action might be misinterpreted, then the only equilibrium in pure strategies is the equilibrium of the simultaneous-move game. He noted however that if the chance of misinterpreting the action is sufficiently small, then there is an equilibrium in mixed strategies in which the player chooses the Stackelberg action with probability arbitrarily close to one.
Step2: And the payoff matrix for the "follower" player
Step3: The constructor Game.from_arrays creates a game in normal form from a collection of payoff matrices, one per player.
Step4: Here, we have set up the row player as the would-be "Leader" player (the one who would have an incentive to commit). The strategy "S" (for both players) would be the subgame-perfect outcome if the row player had the opportunity to commit, where as we can see that "C" is going to be the unique equilibrium (obtainable by iterative elimination of strictly dominated strategies!) if players choose simultaneously.
Step5: The function StrategySupportProfile.undominated() determines which strategies are (strictly) dominated (taking into account only those in the SupportProfile), and returns a new StrategySupportProfile containing only the undominated ones.
Step6: Iterating on this allows you to implement iterative elimination of strictly dominated strategies, to whatever depth you would like.
Step7: Of course after two rounds, we run out of things to eliminate!
Step8: The other useful concept is the StrategicRestriction. A StrategicRestriction is a full-fledged game in its own right, derived from another game by using only a subset of its strategies. StrategySupportProfile objects have a .restrict() member that returns a restricted game.
Step9: A StrategicRestriction remembers its "parent" or "unrestricted" game. So while a StrategicRestriction is itself a game that can be manipulated in its own right, it is also possible to map e.g. mixed strategy profiles defined on a restriction of a game, to a mixed strategy profile on the corresponding unrestricted game.
Step10: The game with imperfectly-observed commitment
Step11: For this example, we want to be sure we compute all equilibria. The lcp_solve method we used in the poker example computes "most" equilibria. For many games, it actually finds all of them, but there do exist games for which it will miss some out. Fortunately, for two-player games, there is an enumeration method which is guaranteed to find all of them. This is implemented as gambit.nash.enummixed_solve().
Step12: So, we have multiple equilibria. Let's inspect them. This method actually returns mixed strategies on the strategic game (a version that uses the sequence form to do enumeration directly on the extensive game is forthcoming!)
Step13: We can view the behaviour equivalent using .as_behavior(), which does the Kuhn's Theorem calculation for us. The first equilibrium is in pure strategies, and corresponds to the simultaneous-move outcome
Step14: The second equilibrium is a low-commitment equilibrium
Step15: And the third equilibrium is a high-commitment equilibrium
Step16: We can then make a plot, in which we vary the probability the signal "correctly" transmits Leader's action, and ask what are the probabilities the leader plays $S$ as a function (technically, correspondence) of that accuracy.
Step17: That's an interesting enough graph. It shows there is a "high-commitment" equilibrium as long as $p\geq \frac{3}{4}$, or $p\leq \frac{1}{4}$. However, game theorists will be surprised to see the two branches of the graph are not connected. After all, we are essentially perturbing the payoffs of the game continuously as we move along the horizontal axis.
Step18: We have three equilibria listed, but note that the second and third both involve the Follower (column) player choosing the same pure strategy (which is to respond to s with S and c with C). In the game, there is a degeneracy
Step19: First, we'll run the gambit-enummixed tool on the game, and we'll see we get the same equilibria out. The equilibria are written out in CSV for convenient parsing.
Step20: The -h switch to any command-line tool gives a standard brief help synopsis.
Step21: The option -c (output connectnedness information) is the one we want; it'll tell us how the three extreme equilibria listed are structured.
|
1,367
|
<ASSISTANT_TASK:>
Python Code:
import pymc3 as pm
with pm.Model() as model:
parameter = pm.Exponential("poisson_param", 1.0)
data_generator = pm.Poisson("data_generator", parameter)
with model:
data_plus_one = data_generator + 1
parameter.tag.test_value
with pm.Model() as model:
theta = pm.Exponential("theta", 2.0)
data_generator = pm.Poisson("data_generator", theta)
with pm.Model() as ab_testing:
p_A = pm.Uniform("P(A)", 0, 1)
p_B = pm.Uniform("P(B)", 0, 1)
print("parameter.tag.test_value =", parameter.tag.test_value)
print("data_generator.tag.test_value =", data_generator.tag.test_value)
print("data_plus_one.tag.test_value =", data_plus_one.tag.test_value)
with pm.Model() as model:
parameter = pm.Exponential("poisson_param", 1.0, testval=0.5)
print("\nparameter.tag.test_value =", parameter.tag.test_value)
with pm.Model() as model:
lambda_1 = pm.Exponential("lambda_1", 1.0)
lambda_2 = pm.Exponential("lambda_2", 1.0)
tau = pm.DiscreteUniform("tau", lower=0, upper=10)
new_deterministic_variable = lambda_1 + lambda_2
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
idx = np.arange(n_data_points)
with model:
lambda_ = pm.math.switch(tau >= idx, lambda_1, lambda_2)
import theano.tensor as tt
with pm.Model() as theano_test:
p1 = pm.Uniform("p", 0, 1)
p2 = 1 - p1
p = tt.stack([p1, p2])
assignment = pm.Categorical("assignment", p)
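# Quick side-by-side of theano's stack and NumPy's, outside any model
# (illustrative only; .eval() compiles and evaluates the tiny theano graph):
import numpy as np
print(np.stack([0.3, 1 - 0.3]))           # -> array([0.3, 0.7])
print(tt.stack([0.3, 1 - 0.3]).eval())    # same values, built as a symbolic graph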
%matplotlib inline
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
import scipy.stats as stats
figsize(12.5, 4)
samples = lambda_1.random(size=20000)
plt.hist(samples, bins=70, density=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
data = np.array([10, 5])
with model:
fixed_variable = pm.Poisson("fxd", 1, observed=data)
print("value: ", fixed_variable.tag.test_value)
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
with model:
obs = pm.Poisson("obs", lambda_, observed=data)
print(obs.tag.test_value)
tau = np.random.randint(0, 80)
print(tau)
alpha = 1./20.
lambda_1, lambda_2 = np.random.exponential(scale=1/alpha, size=2)
print(lambda_1, lambda_2)
data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau), stats.poisson.rvs(mu=lambda_2, size = 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau-1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
def plot_artificial_sms_dataset():
tau = stats.randint.rvs(0, 80)
alpha = 1./20.
lambda_1, lambda_2 = stats.expon.rvs(scale=1/alpha, size=2)
data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau), stats.poisson.rvs(mu=lambda_2, size=80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau-1], color="r", label="user behaviour changed")
plt.xlim(0, 80);
figsize(12.5, 5)
plt.title("More example of artificial datasets")
for i in range(4):
plt.subplot(4, 1, i+1)
plot_artificial_sms_dataset()
import pymc3 as pm
# The parameters are the bounds of the Uniform.
with pm.Model() as model:
p = pm.Uniform('p', lower=0, upper=1)
#set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = stats.bernoulli.rvs(p_true, size=N)
print(occurrences) # Remember: Python treats True == 1, and False == 0
print(np.sum(occurrences))
# Occurrences.mean is equal to n/N.
print("What is the observed frequency in Group A? %.4f" % np.mean(occurrences))
print("Does this equal the true frequency? %s" % (np.mean(occurrences) == p_true))
#include the observations, which are Bernoulli
with model:
obs = pm.Bernoulli("obs", p, observed=occurrences)
# To be explained in chapter 3
step = pm.Metropolis()
trace = pm.sample(18000, step=step)
burned_trace = trace[1000:]
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(burned_trace["p"], bins=25, histtype="stepfilled", density=True)
plt.legend();
import pymc3 as pm
figsize(12, 4)
#these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
#notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
#generate some observations
observations_A = stats.bernoulli.rvs(true_p_A, size=N_A)
observations_B = stats.bernoulli.rvs(true_p_B, size=N_B)
print("Obs from Site A: ", observations_A[:30], "...")
print("Obs from Site B: ", observations_B[:30], "...")
print(np.mean(observations_A))
print(np.mean(observations_B))
# Set up the pymc3 model. Again assume Uniform priors for p_A and p_B.
with pm.Model() as model:
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
delta = pm.Deterministic("delta", p_A - p_B)
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, observed=observations_A)
obs_B = pm.Bernoulli("obs_B", p_B, observed=observations_B)
# To be explained in chapter 3.
step = pm.Metropolis()
trace = pm.sample(20000, step=step)
burned_trace=trace[1000:]
p_A_samples = burned_trace["p_A"]
p_B_samples = burned_trace["p_B"]
delta_samples = burned_trace["delta"]
figsize(12.5, 10)
#histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", density=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", density=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", density=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
np.mean(delta_samples < 0))
print("Probability site A is BETTER than site B: %.3f" % \
np.mean(delta_samples > 0))
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
import pymc3 as pm
N = 100
with pm.Model() as model:
p = pm.Uniform("freq_cheating", 0, 1)
with model:
true_answers = pm.Bernoulli("truths", p, shape=N, testval=np.random.binomial(1, 0.5, N))
with model:
first_coin_flips = pm.Bernoulli("first_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
print(first_coin_flips.tag.test_value)
with model:
second_coin_flips = pm.Bernoulli("second_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
import theano.tensor as tt
with model:
val = first_coin_flips*true_answers + (1 - first_coin_flips)*second_coin_flips
observed_proportion = pm.Deterministic("observed_proportion", tt.sum(val)/float(N))
observed_proportion.tag.test_value
X = 35
with model:
observations = pm.Binomial("obs", N, observed_proportion, observed=X)
# To be explained in Chapter 3!
with model:
step = pm.Metropolis(vars=[p])
trace = pm.sample(40000, step=step)
burned_trace = trace[15000:]
figsize(12.5, 3)
p_trace = burned_trace["freq_cheating"]
plt.hist(p_trace, histtype="stepfilled", density=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
with pm.Model() as model:
p = pm.Uniform("freq_cheating", 0, 1)
p_skewed = pm.Deterministic("p_skewed", 0.5*p + 0.25)
with model:
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed, observed=35)
with model:
# To Be Explained in Chapter 3!
step = pm.Metropolis()
trace = pm.sample(25000, step=step)
burned_trace = trace[2500:]
figsize(12.5, 3)
p_trace = burned_trace["freq_cheating"]
plt.hist(p_trace, histtype="stepfilled", density=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
N = 10
x = np.ones(N, dtype=object)
with pm.Model() as model:
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i+1.0)**2)
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
#drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
#plot it, as a function of tempature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.legend();
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.legend(loc="lower left");
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
plt.plot(x, nor.pdf(x, _mu, scale=1./_tau),
label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
plt.fill_between(x, nor.pdf(x, _mu, scale=1./_tau), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
import pymc3 as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
#notice the`value` here. We explain why below.
with pm.Model() as model:
beta = pm.Normal("beta", mu=0, tau=0.001, testval=0)
alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0)
p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha)))
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
with model:
observed = pm.Bernoulli("bernoulli_obs", p, observed=D)
# Mysterious code to be explained in Chapter 3
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(120000, step=step, start=start)
burned_trace = trace[100000::2]
alpha_samples = burned_trace["alpha"][:, None] # best to make them 1d
beta_samples = burned_trace["beta"][:, None]
figsize(12.5, 6)
#histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", density=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", density=True)
plt.legend();
t = np.linspace(temperature.min() - 5, temperature.max()+5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, density=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
N = 10000
with pm.Model() as model:
beta = pm.Normal("beta", mu=0, tau=0.001, testval=0)
alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0)
p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha)))
observed = pm.Bernoulli("bernoulli_obs", p, observed=D)
simulated = pm.Bernoulli("bernoulli_sim", p, shape=p.tag.test_value.shape)
step = pm.Metropolis(vars=[p])
trace = pm.sample(N, step=step)
figsize(12.5, 5)
simulations = trace["bernoulli_sim"]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i+1)
plt.scatter(temperature, simulations[1000*i, :], color="k",
s=50, alpha=0.6)
posterior_probability = simulations.mean(axis=0)
print("posterior prob of defect | realized defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[i], D[i]))
ix = np.argsort(posterior_probability)
print("probb | defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]]))
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7./23*np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model");
#type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$");
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is an extra layer of convenience compared to PyMC. Any variables created within a given Model's context will be automatically assigned to that model. If you try to define a variable outside of the context of a model, you will get an error.
Step2: We can examine the same variables outside of the model context once they have been defined, but to define more variables that the model will recognize they have to be within the context.
Step3: Each variable assigned to a model will be defined with its own name, the first string parameter (we will cover this further in the variables section). To create a different model object with the same name as one we have used previously, we need only run the first block of code again.
Step4: We can also define an entirely separate model. Note that we are free to name our models whatever we like, so if we do not want to overwrite an old model we need only make another.
Step5: You probably noticed that PyMC3 will often give you notifications about transformations when you add variables to your model. These transformations are done internally by PyMC3 to modify the space that the variable is sampled in (when we get to actually sampling the model). This is an internal feature which helps with the convergence of our samples to the posterior distribution and serves to improve the results.
Step6: The test_value is used only for the model, as the starting point for sampling if no other start is specified. It will not change as a result of sampling. This initial state can be changed at variable creation by specifying a value for the testval parameter.
Step7: This can be helpful if you are using a more unstable prior that may require a better starting point.
Step8: If we want a deterministic variable to actually be tracked by our sampling, however, we need to define it explicitly as a named deterministic variable with the constructor.
Step9: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable. We use the switch function here to change from $\lambda_1$ to $\lambda_2$ at the appropriate time. This function is directly from the theano package, which we will discuss in the next section.
Step10: Here we use theano's stack() function in the same way we would use one of NumPy's stacking functions
Step11: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
Step12: This is how we include data into our models
Step13: Modeling approaches
Step14: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution
Step15: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example
Step16: 4. Plot the artificial dataset
Step17: It is okay that our fictional dataset does not look like our observed dataset
Step18: Later we will see how we use this to make predictions and test the appropriateness of our models.
Step19: Had we had stronger beliefs, we could have expressed them in the prior above.
Step20: The observed frequency is
Step21: We combine the observations into the PyMC3 observed variable, and run our inference algorithm
Step22: We plot the posterior distribution of the unknown $p_A$ below
Step23: Our posterior distribution puts most weight near the true value of $p_A$, but also some weight in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
Step24: Below we plot the posterior distributions for the three unknowns
Step25: Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
Step26: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has less samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Step27: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
Step28: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students
Step29: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$
Step30: Although not everyone flips a second time, we can still model the possible realization of second coin-flips
Step31: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC3 deterministic variable
Step32: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails, and the second is heads, and are 0 else. Finally, the last line sums this vector and divides by float(N), producing a proportion.
Step33: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
Step34: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step35: With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency?
Step36: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
Step37: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step38: More PyMC3 Tricks
Step39: The remainder of this chapter examines some practical examples of PyMC3 modeling
Step40: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
Step41: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function
Step42: Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
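A small sketch of the biased logistic function (parameter names are illustrative):
import numpy as np

def logistic(x, beta, alpha=0):
    # beta sets the steepness; alpha shifts the curve left or right (the bias)
    return 1.0 / (1.0 + np.exp(beta * x + alpha))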
Step43: A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter
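A two-line numerical check (illustrative parameters; the Normal here is parameterized by precision tau, so its standard deviation is 1/sqrt(tau)):
import numpy as np

mu, tau = 5.0, 0.25
samples = np.random.normal(mu, 1.0 / np.sqrt(tau), size=100000)
print(samples.mean())  # approximately mu, the expected value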
Step44: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like
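Putting the pieces together, the model might be sketched as follows (hedged: `temperature` and `D` are assumed to be the observed temperature and defect arrays from the dataset):
import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    beta = pm.Normal("beta", mu=0, tau=0.001, testval=0)
    alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0)
    p = pm.Deterministic("p", 1.0 / (1.0 + tt.exp(beta * temperature + alpha)))
    observed = pm.Bernoulli("bernoulli_obs", p, observed=D)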
Step45: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$
Step46: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Step47: Above we also plotted two possible realizations of what the actual underlying system might be. Both are as likely as any other draw. The blue line is what occurs when we average all 20000 possible dotted lines together.
Step48: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
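One way such an interval can be computed from posterior draws (a sketch; `p_t` is assumed to be a samples-by-temperatures array of posterior probabilities on a temperature grid):
import numpy as np

qs = np.percentile(p_t, [2.5, 97.5], axis=0)  # lower/upper 95% CI bounds per temperature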
Step49: Is our model appropriate?
Step50: Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).
Step51: Next we sort each column by the posterior probabilities
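A sketch of that sort (hedged; `posterior_probability` and `D` are assumed arrays of per-observation posterior means and observed defects):
import numpy as np

ix = np.argsort(posterior_probability)
ordered = np.c_[posterior_probability[ix], D[ix]]  # probability alongside its outcome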
Step52: We can present the above data better in a figure
Step53: The snaking line is the sorted probabilities; blue bars denote defects, and empty space (or grey bars for the optimistic readers) denotes non-defects. As the probability rises, we see more and more defects occur. On the right-hand side, the plot suggests that when the posterior probability is large (line close to 1), more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
Step54: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
Step55: References
|
1,368
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
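# For illustration only (hypothetical -- the real IPSL answer is not recorded here),
# a completed enumerated property would look like:
#     DOC.set_value("AGCM")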
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
1,369
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from scipy import stats
import mne
from mne import spatial_src_adjacency
from mne.stats import spatio_temporal_cluster_test, summarize_clusters_stc
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
stc_fname = data_path + '/MEG/sample/sample_audvis-meg-lh.stc'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
# Load stc to in common cortical space (fsaverage)
stc = mne.read_source_estimate(stc_fname)
stc.resample(50, npad='auto')
# Read the source space we are morphing to
src = mne.read_source_spaces(src_fname)
fsave_vertices = [s['vertno'] for s in src]
morph = mne.compute_source_morph(stc, 'sample', 'fsaverage',
spacing=fsave_vertices, smooth=20,
subjects_dir=subjects_dir)
stc = morph.apply(stc)
n_vertices_fsave, n_times = stc.data.shape
tstep = stc.tstep * 1000 # convert to milliseconds
n_subjects1, n_subjects2 = 7, 9
print('Simulating data for %d and %d subjects.' % (n_subjects1, n_subjects2))
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X1 = np.random.randn(n_vertices_fsave, n_times, n_subjects1) * 10
X2 = np.random.randn(n_vertices_fsave, n_times, n_subjects2) * 10
X1[:, :, :] += stc.data[:, :, np.newaxis]
# make the activity bigger for the second set of subjects
X2[:, :, :] += 3 * stc.data[:, :, np.newaxis]
# We want to compare the overall activity levels for each subject
X1 = np.abs(X1) # only magnitude
X2 = np.abs(X2) # only magnitude
print('Computing adjacency.')
adjacency = spatial_src_adjacency(src)
# Note that X needs to be a list of multi-dimensional array of shape
# samples (subjects_k) x time x space, so we permute dimensions
X1 = np.transpose(X1, [2, 1, 0])
X2 = np.transpose(X2, [2, 1, 0])
X = [X1, X2]
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.0001
f_threshold = stats.distributions.f.ppf(1. - p_threshold / 2.,
n_subjects1 - 1, n_subjects2 - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu =\
spatio_temporal_cluster_test(X, adjacency=adjacency, n_jobs=1,
threshold=f_threshold, buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
fsave_vertices = [np.arange(10242), np.arange(10242)]
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A != condition B
brain = stc_all_cluster_vis.plot('fsaverage', hemi='both',
views='lateral', subjects_dir=subjects_dir,
time_label='temporal extent (ms)',
clim=dict(kind='value', lims=[0, 1, 40]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute statistic
Step3: Visualize the clusters
|
1,370
|
<ASSISTANT_TASK:>
Python Code:
import pymysql
import numpy as np
import pandas as pd
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"sakila",
charset = "utf8",
)
customer_df = pd.read_sql("SELECT * FROM customer;", db)
rental_df = pd.read_sql("SELECT * FROM rental;", db)
df = rental_df.merge(customer_df, on="customer_id")
df.head(1)
rental_per_customer_groups = df.groupby("customer_id")
rental_per_customer_groups.get_group(1)
rental_per_customer_groups.agg({"customer_id": np.size})["customer_id"]
# rental_per_customer_groups.agg({"customer_id": np.size})["customer_id"] > 30
is_many_rentals_user = rental_per_customer_groups.size() > 30
is_many_rentals_user
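# A pandas-only sketch of the same filter (an assumption about what was intended;
# the notebook switches to SQL below instead):
rentals_per_customer = rental_per_customer_groups.size()
rentals_per_customer[rentals_per_customer > 30]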
#여기서는 어떤 문제로 되지 않는다. 다음에 알려주겠다.
# SQL로 하겠다
# 1. Sub Query - Query 안에 Query 가 들어있다.
SQL_QUERY =
SELECT
r.customer_id,
COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
WHERE rentals > 30
"""
pd.read_sql(SQL_QUERY, db)
# Fails because the execution order is FROM -> GROUP BY -> WHERE (rentals doesn't exist yet) -> SELECT, where COUNT runs last
# 1. Sub Query = Query 안에 Query가 들어있다.
SQL_QUERY = """
SELECT rentals_per_customer.customer_id "Customer ID", rentals_per_customer.rentals
FROM (
SELECT
r.customer_id,
COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
) AS rentals_per_customer
WHERE rentals > 30
;
"""
pd.read_sql(SQL_QUERY, db)
# Splitting the query into named pieces reads better than one big nested subquery.
RENTALS_PER_CUSTOMER_SQL_QUERY = """
SELECT
r.customer_id,
COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
;
"""
SQL_QUERY = """
SELECT *
FROM (
{rentals_per_customer_sql_query}
) AS rentals_per_customer
WHERE rentals > 30
;
""".format(
rentals_per_customer_sql_query=RENTALS_PER_CUSTOMER_SQL_QUERY.replace(";", "")
)
pd.read_sql(SQL_QUERY, db)
print(SQL_QUERY)
# For customers with more than 30 rentals => name and email
RESULT_SQL_QUERY = """
SELECT customer.last_name, customer.first_name, customer.email
FROM ({SQL_QUERY}) many_rental_user
JOIN customer ON many_rental_user.customer_id = customer.customer_id
;
""".format(
SQL_QUERY=SQL_QUERY.replace(";", "")
)
pd.read_sql(RESULT_SQL_QUERY, db)
# Temporary Table
SQL_QUERY = """
DROP TEMPORARY TABLE IF EXISTS rentals_per_customer;
CREATE TEMPORARY TABLE rentals_per_customer
SELECT
r.customer_id,
COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
;
"""
# pd.read_sql() raises an error for this statement, so execute it with a cursor instead
cursor = db.cursor()
cursor.execute(SQL_QUERY)
SQL_QUERY = """
SELECT rpc.customer_id, rpc.rentals
FROM rentals_per_customer rpc
WHERE rentals > 30
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT r.customer_id, COUNT(*) rentals
FROM rental r
JOIN customer c
ON r.customer_id = c.customer_id
GROUP BY r.customer_id
# WHERE rentals > 30  # cannot filter on an aggregated result with WHERE
# HAVING can filter on the result of an aggregation
HAVING rentals > 30
"""
pd.read_sql(SQL_QUERY, db)
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"sakila",
charset="utf8"
)
film_df = pd.read_sql("SELECT * FROM film;", db)
rental_df = pd.read_sql("SELECT * FROM rental;", db)
payment_df = pd.read_sql("SELECT * FROM payment;", db)
inventory_df = pd.read_sql("SELECT * FROM inventory;", db)
SQL_QUERY = """
SELECT f.film_id, f.title, SUM(p.amount) "revenue"
FROM
film f,
rental r,
payment p,
inventory i
WHERE
f.film_id = i.film_id
AND i.inventory_id = r.inventory_id
AND r.rental_id = p.rental_id
GROUP BY f.film_id
ORDER BY revenue DESC
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT f.film_id, f.title, SUM(p.amount) "revenue"
FROM payment p
JOIN rental r ON p.rental_id = r.rental_id
JOIN inventory i ON i.inventory_id = r.inventory_id
JOIN film f ON f.film_id = i.film_id
GROUP BY f.film_id
ORDER BY revenue DESC
;
"""
pd.read_sql(SQL_QUERY, db)
customer_df = pd.read_sql("SELECT * FROM customer;", db)
customer_df.head(1)
payment_df.head(1)
SQL_QUERY = """
SELECT c.first_name, c.last_name, SUM(p.amount) "revenue"
FROM customer c
JOIN payment p
ON c.customer_id = p.customer_id
GROUP BY c.customer_id
ORDER BY revenue DESC
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT a.first_name, a.last_name, SUM(p.amount) "revenue"
FROM
actor a,
film_actor fa,
inventory i,
rental r,
payment p
WHERE
a.actor_id = fa.actor_id
AND fa.film_id = i.film_id
AND i.inventory_id = r.inventory_id
AND r.rental_id = p.rental_id
GROUP BY a.actor_id
ORDER BY revenue DESC
;
"""
pd.read_sql(SQL_QUERY, db)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step7: 4T_Getting started with SQL through Pandas (4) - HAVING, SUB QUERY
Step9: HAVING
Step10: Exercise)
Step13: film_df => film_id, title
Step15: Additional exercise)
Step17: Top 10 actors whose films generated the most revenue
|
1,371
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import getpass
import pdvega
# for configuring connection
from configobj import ConfigObj
import os
%matplotlib inline
# Create a database connection using settings from config file
config='../db/config.ini'
# connection info
conn_info = dict()
if os.path.isfile(config):
config = ConfigObj(config)
conn_info["sqluser"] = config['username']
conn_info["sqlpass"] = config['password']
conn_info["sqlhost"] = config['host']
conn_info["sqlport"] = config['port']
conn_info["dbname"] = config['dbname']
conn_info["schema_name"] = config['schema_name']
else:
conn_info["sqluser"] = 'postgres'
conn_info["sqlpass"] = ''
conn_info["sqlhost"] = 'localhost'
conn_info["sqlport"] = 5432
conn_info["dbname"] = 'eicu'
conn_info["schema_name"] = 'public,eicu_crd'
# Connect to the eICU database
print('Database: {}'.format(conn_info['dbname']))
print('Username: {}'.format(conn_info["sqluser"]))
if conn_info["sqlpass"] == '':
# try connecting without password, i.e. peer or OS authentication
try:
if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'):
con = psycopg2.connect(dbname=conn_info["dbname"],
user=conn_info["sqluser"])
else:
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"])
except:
conn_info["sqlpass"] = getpass.getpass('Password: ')
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"],
password=conn_info["sqlpass"])
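# Prepend the search_path so unqualified table names resolve against the eICU schema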
query_schema = 'set search_path to ' + conn_info['schema_name'] + ';'
patientunitstayid = 242380
query = query_schema + """
select *
from infusiondrug
where patientunitstayid = {}
order by infusionoffset
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.head()
query = query_schema + """
select
pt.hospitalid
, count(distinct pt.patientunitstayid) as number_of_patients
, count(distinct d.patientunitstayid) as number_of_patients_with_tbl
from patient pt
left join infusiondrug d
on pt.patientunitstayid = d.patientunitstayid
group by pt.hospitalid
"""
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)
df.head(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Examine a single patient
Step4: Above we can see a patient started on (or quickly administered after admission) two drugs
|
1,372
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, decomposition, datasets
from sklearn.metrics import accuracy_score
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
# http://scikit-learn.org/stable/auto_examples/plot_digits_pipe.html#example-plot-digits-pipe-py
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
logistic = linear_model.LogisticRegression()
pca = decomposition.PCA()
pipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)])
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
###############################################################################
# Plot the PCA spectrum
pca.fit(X_digits)
fig, ax = plt.subplots(1,1)
ax.plot(pca.explained_variance_, linewidth=2)
ax.set_xlabel('n_components')
ax.set_ylabel('explained_variance_')
###############################################################################
# Prediction
n_components = [20, 40, 64]
Cs = np.logspace(-4, 4, 10)
#Parameters of pipelines can be set using ‘__’ separated parameter names:
estimator = GridSearchCV(pipe,
dict(pca__n_components=n_components,
logistic__C=Cs))
estimator.fit(X_digits, y_digits)
print('# components:', estimator.best_estimator_.named_steps['pca'].n_components)
print('C:', estimator.best_estimator_.named_steps['logistic'].C)
print(estimator)
# http://scikit-learn.org/stable/auto_examples/feature_stacker.html#example-feature-stacker-py
# Author: Andreas Mueller <amueller@ais.uni-bonn.de>
#
# License: BSD 3 clause
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
iris = load_iris()
X, y = iris.data, iris.target
# This dataset is way to high-dimensional. Better do PCA:
pca = PCA(n_components=2)
# Maybe some original features where good, too?
selection = SelectKBest(k=1)
# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])
# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
svm = SVC(kernel="linear")
# Do grid search over k, n_components and C:
pipeline = Pipeline([("features", combined_features), ("svm", svm)])
param_grid = dict(features__pca__n_components=[1, 2, 3],
features__univ_select__k=[1, 2],
svm__C=[0.1, 1, 10])
grid_search = GridSearchCV(pipeline, param_grid=param_grid)
grid_search.fit(X, y)
print('PCA components:', grid_search.best_estimator_.named_steps['features'].get_params()['pca'].n_components)
print('Original features used:', grid_search.best_estimator_.named_steps['features'].get_params()['univ_select'].k)
print('C:', grid_search.best_estimator_.named_steps['svm'].C)
print(grid_search.best_estimator_)
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train',
categories=['comp.graphics', 'sci.med'], shuffle=True, random_state=0)
print(twenty_train.target_names)
import numpy as np
from sklearn.base import TransformerMixin
class ModelTransformer(TransformerMixin):
"""Wrap a classifier model so that it can be used in a pipeline"""
def __init__(self, model):
self.model = model
def fit(self, *args, **kwargs):
self.model.fit(*args, **kwargs)
return self
def transform(self, X, **transform_params):
return self.model.predict_proba(X)
def predict_proba(self, X, **transform_params):
return self.transform(X, **transform_params)
class VarTransformer(TransformerMixin):
"""Compute the variance"""
def transform(self, X, **transform_params):
var = X.var(axis=1)
return var.reshape((var.shape[0],1))
def fit(self, X, y=None, **fit_params):
return self
class MedianTransformer(TransformerMixin):
"""Compute the median"""
def transform(self, X, **transform_params):
median = np.median(X, axis=1)
return median.reshape((median.shape[0],1))
def fit(self, X, y=None, **fit_params):
return self
class ChannelExtractor(TransformerMixin):
"""Extract a single channel for downstream processing"""
def __init__(self, channel):
self.channel = channel
def transform(self, X, **transformer_params):
return X[:,:,self.channel]
def fit(self, X, y=None, **fit_params):
return self
class FFTTransformer(TransformerMixin):
"""Convert to the frequency domain and then sum over bins"""
def transform(self, X, **transformer_params):
fft = np.fft.rfft(X, axis=1)
fft = np.abs(fft)
fft = np.cumsum(fft, axis=1)
bin_size = 10
max_freq = 60
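# Differencing the cumulative sum gives the total FFT magnitude within each frequency bin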
return np.column_stack([fft[:,i] - fft[:,i-bin_size]
for i in range(bin_size, max_freq, bin_size)])
def fit(self, X, y=None, **fit_params):
return self
"""This cell is not expected to run correctly. We don't have all the packages needed.
If you want to run this example, download the repository and the source data."""
import numpy as np
import os
import pickle
from sklearn.cross_validation import cross_val_score, StratifiedShuffleSplit
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier
import get_traces
import transformers as trans
def build_pipeline(X):
"""Helper function to build the pipeline of feature transformations.
We do the same thing to each channel so rather than manually copying changes
for all channels this is automatically generated"""
channels = X.shape[2]
pipeline = Pipeline([
('features', FeatureUnion([
('select_%d_pipeline' % i,
Pipeline([('select_%d' % i, trans.ChannelExtractor(i)),
('channel features', FeatureUnion([
('var', trans.VarTransformer()),
('median', trans.MedianTransformer()),
('fft', trans.FFTTransformer()),
])),
])
) for i in range(channels)])),
('classifier', trans.ModelTransformer(RandomForestClassifier(
n_estimators=500,
max_depth=None,
min_samples_split=1,
random_state=0))),
])
return pipeline
def get_transformed_data(patient, func=get_traces.get_training_traces):
"""Load in all the data"""
X = []
channels = get_traces.get_num_traces(patient)
# Reading in 43 Gb of data . . .
for i in range(channels):
x, y = func(patient, i)
X.append(x)
return (np.dstack(X), y)
all_labels = []
all_predictions = np.array([])
folders = [i for i in os.listdir(get_traces.directory) if i[0] != '.']
folders.sort()
for folder in folders:
print('Starting %s' % folder)
print('getting data')
X, y = get_transformed_data(folder)
print(X.shape)
print('stratifiedshufflesplit')
cv = StratifiedShuffleSplit(y,
n_iter=5,
test_size=0.2,
random_state=0,)
print('cross_val_score')
pipeline = build_pipeline(X)
# Putting this in a list is unnecessary for just one pipeline - use to compare multiple pipelines
scores = [
cross_val_score(pipeline, X, y, cv=cv, scoring='roc_auc')
]
print('displaying results')
for score, label in zip(scores, ['pipeline',]):
print("AUC: {:.2%} (+/- {:.2%}), {:}".format(score.mean(),
score.std(), label))
clf = pipeline
print('Fitting full model')
clf.fit(X, y)
print('Getting test data')
testing_data, files = get_transformed_data(folder,
get_traces.get_testing_traces)
print('Generating predictions')
predictions = clf.predict_proba(testing_data)
print(predictions.shape, len(files))
with open('%s_randomforest_predictions.pkl' % folder, 'wb') as f:
pickle.dump((files, predictions[:,1]), f)
from sklearn import datasets
diabetes = datasets.load_diabetes()
# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
# Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements
# were obtained for each of n = 442 diabetes patients,
# as well as the response of interest, a quantitative measure of disease progression one year after baseline.
X = diabetes.data # independent variables
y = diabetes.target # dependent val
print(X.shape)
print(y.shape)
import pandas as pd
data = pd.DataFrame(X, columns=['age', 'sex', 'bmi', 'map',
'tc', 'ldl', 'hdl', 'tch', 'ltg', 'glu'])
data.info()
from sklearn import linear_model
bmi = X[:, 2].reshape(-1, 1)
outcome = y
reg = linear_model.LinearRegression()
reg.fit(bmi, outcome)
predicted_outcome = reg.predict(bmi)
plt.plot(predicted_outcome, outcome, 'k.')
plt.xlabel("Predicted outcome")
plt.ylabel("Clinical outcome")
print('Directly trained model predictions:', predicted_outcome[:10])
from sklearn.externals import joblib
joblib.dump(reg, 'diabetes_prediction_model.pkl')
reg2 = joblib.load('diabetes_prediction_model.pkl')
predicted_outcome2 = reg2.predict(bmi)
print('Saved model predictions:', predicted_outcome2[:10])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In previous weeks we have covered preprocessing our data, dimensionality reduction, clustering, regression and classification. This week we will be pulling these processes together into a complete project.
Step2: Scikit learn includes functionality for structuring our code and easily exploring the impact of different parameters not only in the machine learning algorithm we choose but at every stage of our solution.
Step3: FeatureUnion
Step4: Text classification
Step13: Advanced Pipeline
Step14: Saving a model
|
1,373
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import numpy as np
import tensorflow as tf
import scipy.ndimage as nd
import time
import imageio
import matplotlib
import matplotlib.pyplot as plt
import lucid.modelzoo.vision_models as models
from lucid.misc.io import show
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
from lucid.optvis.objectives import wrap_objective, _dot, _dot_cossim
from lucid.optvis.transform import standard_transforms, crop_or_pad_to, pad, jitter, random_scale, random_rotate
from lucid.modelzoo.vision_base import Model
class DLA(Model):
model_path = '../protobufs/full_model_8_13.pb'
image_shape = [1, 400]
image_value_range = [0, 1]
input_name = 'x'
model = DLA()
model.load_graphdef()
LAYERS = { 'conv1': ['Conv2D', 100],
'conv1_relu': ['Relu', 100],
'pool1': ['MaxPool', 100],
'conv2': ['Conv2D_1', 96],
'conv2_relu': ['Relu_1', 96],
'pool2': ['MaxPool_1', 96],
'conv3': ['Conv2D_2', 96],
'conv3_relu': ['Relu_2', 96],
'pool3': ['MaxPool_2', 96]}
# Specify param.image size to work with our model's input; must be a multiple of 400.
param_f = lambda: param.image(120, h=120, channels=3)
# std_transforms = [
# pad(2, mode="constant", constant_value=.5),
# jitter(2)]
# transforms = std_transforms + [crop_or_pad_to(*model.image_shape[:2])]
transforms = []
# Specify the objective
# neuron = lambda n: objectives.neuron(LAYERS['pool1'][0], n)
# obj = neuron(0)
channel = lambda n: objectives.channel(LAYERS['pool1'][0], n)
obj = channel(0)
# Specify the number of optimization steps, will output image at each step
thresholds = (1, 2, 4, 8, 16, 32, 64, 128, 256, 512)
# Render the objective
imgs = render.render_vis(model, obj, param_f, thresholds=thresholds, transforms=transforms)
show([nd.zoom(img[0], [1,1,1], order=0) for img in imgs])
# test = np.array(imgs)
# test = test.reshape(400)
# test = test[0:400:1]
# fig = plt.figure(frameon=False);
# ax = plt.Axes(fig, [0, 0, 1, 1]);
# ax.set_axis_off();
# fig.add_axes(ax);
# ax.plot(test, 'black');
# ax.set(xlim=(0, 400));
# ax.set(ylim=(0,1))
# Specify param.image size
param_f = lambda: param.image(400, h=1, channels=1)
transforms = []
# Specify the objective
# neuron = lambda n: objectives.neuron(LAYERS['pool1'][0], n)
# obj = neuron(0)
channel = lambda n: objectives.channel(LAYERS['pool1'][0], n)
obj = channel(0)
# Specify the number of optimization steps,
thresholds = (128,)
# Render the objective
imgs = render.render_vis(model, obj, param_f, thresholds=thresholds, transforms=transforms, verbose=False)
# Display visualization
test = np.array(imgs)
test = test.reshape(400)
test = test[0:400:1]
fig = plt.figure(frameon=False);
ax = plt.Axes(fig, [0, 0, 1, 1]);
ax.set_axis_off();
fig.add_axes(ax);
ax.plot(test, 'black');
ax.set(xlim=(0, 400));
ax.set(ylim=(0,1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple 3D Visualizations of a neuron
Step2: Simple 1D visualizations
|
1,374
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
"""Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print (accuracy_score(outcomes[:5], predictions))
def predictions_0(data):
"""Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print (accuracy_score(outcomes, predictions))
vs.survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
"""Model with one feature:
- Predict a passenger survived if they are female."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append(1 if passenger['Sex'] == 'female' else 0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print (accuracy_score(outcomes, predictions))
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
"""Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predicting_data = 0
predicting_data = 1 if passenger['Sex'] == 'female' else predicting_data
predicting_data = 1 if passenger['Sex'] == 'male' and passenger['Age'] < 10 else predicting_data
predictions.append(predicting_data)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print (accuracy_score(outcomes, predictions))
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
def predictions_3(data):
"""Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predicting_data = 0
predicting_data = 1 if passenger['Sex'] == 'female' else predicting_data
predicting_data = 1 if passenger['Sex'] == 'male' and passenger['Age'] < 10 else predicting_data
predictions.append(predicting_data)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print (accuracy_score(outcomes, predictions))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
|
1,375
|
<ASSISTANT_TASK:>
Python Code:
# Assumed setup (these imports live in the notebook's first cell)
from bigbang.archive import Archive
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
url = "scipy-user"
arx = Archive(url,archive_dir="../archives")
arx.data[:1]
act = arx.get_activity()
fig = plt.figure(figsize=(12.5, 7.5))
# act.idxmax().sort_values().T.plot()
(act > 0).idxmax().sort_values().plot()
fig.axes[0].yaxis_date()
fig = plt.figure(figsize=(12.5, 7.5))
(act > 0).idxmax().sort_values().hist()
fig.axes[0].xaxis_date()
n = 5
from bigbang import plot
# A series, indexed by users, of the day of their first post
# This series is ordered by time
first_post = (act > 0).idxmax().sort_values()
# Splitting the previous series into five equal parts,
# each representing a chronological quintile of list members
cohorts = np.array_split(first_post,n)
cohorts = [c.keys() for c in cohorts]
plot.stack(act,partition=cohorts,smooth=10)
cohorts[1].values
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Archive objects have a method that reports for each user how many emails they sent each day.
Step2: This plot will show when each sender sent their first post. A slow ascent means a period where many people joined.
Step3: This is the same data, but plotted as a histogram. It's easier to see the trends here.
Step4: While this is interesting, what if we are interested in how much different "cohorts" of participants stick around and continue to participate in the community over time?
Step5: This gives us a sense of when new members are taking the lead in the community. But what if the old members are just changing their email addresses? To test that case, we should clean our data with entity resolution techniques.
|
1,376
|
<ASSISTANT_TASK:>
Python Code:
# Basic setup
import qutip
import numpy as np
size = 4
t = 1.0
a = qutip.destroy(size)
ad = qutip.create(size)
n = qutip.num(size)
I = qutip.qeye(size)
constant_form = qutip.QobjEvo([n])
def cos_t(t, args):
return np.cos(t)
function_form = qutip.QobjEvo([n, [a+ad, cos_t]])
class callable_time_dependence:
def __init__(self, add):
self.add = add
def __call__(self, t, args):
return self.add + np.cos(t)
callable_form = qutip.QobjEvo([n, [a+ad, callable_time_dependence(2)]])
string_form = qutip.QobjEvo([n, [a+ad, "cos(t)"]])
tlist = np.linspace(0, 10, 101)
values = np.cos(tlist)
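# QuTiP interpolates array coefficients between the tlist sample points (cubic spline)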
array_form = qutip.QobjEvo([n, [a+ad, values]], tlist=tlist)
constant_form(2)
function_form(2)
callable_form(2)
string_form(2)
array_form(2)
string_form.compile()
string_form(4)
def coeff_with_args(t, args):
return t + args['delta']
td_args = qutip.QobjEvo([I, coeff_with_args], args={'delta': 1.})
td_args(2)
# Temporarily overriding the arguments.
td_args(2, args={"delta": 10})
# A subsequent regular call will still use the args given at initialisation.
td_args(2)
td_args_str = qutip.QobjEvo([I, "t + delta"], args={"delta": 1.})
td_args_str(2)
td_args_str(2, args={"delta": 10})
td_args_str = qutip.QobjEvo([I, "f(t)"], args={'f': np.cos})
td_args_str.compile()
td_args_str(0.)
td_args_str(np.pi)
args = {"state": None}
def print_args(t, args):
print("\n".join([
'"' + key + '":\n' + repr(value)
for key, value in args.items()
]))
return t
td_args = qutip.QobjEvo([I, print_args], args=args)
td_args(0)
# The `state` keyword argument is typically unused.
# Here it just simulates being inside a solver at a particular state.
td_args(0, state=qutip.basis(4,2))
(array_form * 2)(0)
(array_form + 1 + a)(0)
liouv = qutip.liouvillian(array_form, c_ops=[constant_form])
liouv(0)
qutip.about()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: Function time dependence
Step3: If you need something more complex, such as a state with memory or to build a parametrised set of functions where the arguments will not change once set, you can use a class which implements __call__.
Step4: String time dependence
Step5: Array dependence
Step6: Evaluation
Step7: Compilation
Step8: Arguments
Step9: String form
Step10: Using objects
Step11: Dynamic arguments
Step12: Mathematics
Step13: Superoperators
Step14: Epilogue
|
1,377
|
<ASSISTANT_TASK:>
Python Code:
#A fast matrix-by-vector product demo
import numpy as np
n = 1000
r = 10
u = np.random.randn(n, r)
v = np.random.randn(n, r)
a = u.dot(v.T)
x = np.random.randn(n)
%timeit a.dot(x)
%timeit u.dot(v.T.dot(x))
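# The factored product u @ (v.T @ x) costs O(n*r) instead of O(n^2) for the dense matvec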
#Computing matrix rank
import numpy as np
n = 50
a = np.ones((n, n))
print 'Rank of the matrix:', np.linalg.matrix_rank(a)
b = a + 1e-5 * np.random.randn(n, n)
print 'Rank of the matrix:', np.linalg.matrix_rank(b, tol=1e-8)
#Computing matrix rank
import numpy as np
n = 50
a = np.ones((n, n))
print 'Rank of the matrix:', np.linalg.matrix_rank(a)
b = a + 1e-5 * np.random.randn(n, n)
print 'Rank of the matrix:', np.linalg.matrix_rank(b)
u, s, v = np.linalg.svd(b)
print s[1]/s[0]
r = 1
u1 = u[:, :r]
s1 = s[:r]
v1 = v[:r, :]
a1 = u1.dot(np.diag(s1).dot(v1))
print np.linalg.norm(b - a1, 2)/s[0]
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.xkcd()
n = 1000
a = [[1.0/(i+j+1) for i in xrange(n)] for j in xrange(n)] #Hilbert matrix
a = np.array(a)
u, s, v = np.linalg.svd(a)
fig, ax = plt.subplots(1, 1)
ax.semilogy(s[:30]/s[0])
#We have very good low-rank approximation of it!
import numpy as np
plt.xkcd()
n = 128
t = np.linspace(0, 5, n)
x, y = np.meshgrid(t, t)
f = 1.0 / (x + y + 0.5) # test your own function. Check 1.0 / (x - y + 0.5)
u, s, v = np.linalg.svd(f, full_matrices=False)
r = 10
u = u[:, :r]
s = s[:r]
v = v[:r, :] # Mind the transpose here!
fappr = u.dot(np.diag(s).dot(v))
er = np.linalg.norm(fappr - f, 'fro') / np.linalg.norm(f, 'fro')
print er
plt.semilogy(s/s[0])
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
plt.xkcd()
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(121, projection='3d')
ax.plot_surface(x, y, f)
ax.set_title('Original function')
ax = fig.add_subplot(122, projection='3d')
ax.plot_surface(x, y, fappr - f)
ax.set_title('Approximation error with rank=%d, err=%3.1e' % (r, er))
fig.subplots_adjust()
fig.tight_layout()
import numpy as np
import matplotlib.pyplot as plt
plt.xkcd()
n = 1000
a = np.random.randn(n, n)
u, s, v = np.linalg.svd(a)
plt.semilogy(s/s[0])
import numpy as np
import pandas as pd
import re #Regular expressions
rows = ['human', 'interface', 'computer', 'user', 'system', 'response', 'time', 'EPS', 'survey', 'trees', 'graph', 'minors']
nterms = len(rows)
docs = []
docs += ['Human machine interface for Lab ABC computer applications']
docs += ['A survey of user opinions of computer system response time']
docs += ['The EPS user interfaces management system']
docs += ['System and human system engineering testing of EPS']
docs += ['Relation of user-perceived response time on user management']
docs += ['The generation of random, binary, unordered trees']
docs += ['The intersection graph of paths in trees']
docs += ['Graph minors IV: Width of trees and well-quasi-ordering']
docs += ['Graph minors: A survey']
ndocs = len(docs)
term_doc = np.zeros((nterms, ndocs))
for i in xrange(nterms):
for j in xrange(ndocs):
if re.search(rows[i], docs[j], re.IGNORECASE):
term_doc[i, j] = 1
#Use pandas to plot
pd.DataFrame(data=term_doc,index=rows)
query = 'Human computer interaction'
qv = np.zeros((nterms))
for i in xrange(nterms):
if re.search(rows[i], query, re.IGNORECASE):
qv[i] = 1
res1 = qv.dot(term_doc) #Non-compressed search result
u, s, v = np.linalg.svd(term_doc)
r = 2
u = u[:, :r]
s = s[:r]
v = v[:r, :] #Numpy transposes
appr1 = u.dot(np.diag(s)).dot(v)
res2 = qv.dot(appr1)
res_all = np.vstack((res1, res2)).T #To make two columns
print 'The query is:', query, ', the scores are:'
pd.DataFrame(res_all, index=docs, columns=['No SVD', 'SVD'])
#Read the data
data_read = np.loadtxt('task2_transact_eval.txt',dtype=np.int32, skiprows=1,delimiter='|')
columns = ['SessionNo', 'ItemNo', 'TransType']
ds = pd.DataFrame(data=data_read,columns=columns)
from itertools import product
ds_buy = ds[ds.TransType==2][['SessionNo', 'ItemNo']]
users, inv_users = np.unique(data_read[:, 0], return_inverse=True)
products, inv_products = np.unique(data_read[:, 1], return_inverse=True)
print 'Unique sessions:', len(users), 'Unique products', len(products)
#Scipy spars
import scipy.sparse
import scipy.sparse.linalg
import matplotlib.pyplot as plt
%matplotlib inline
plt.xkcd()
nnz = len(inv_users)
spmat = scipy.sparse.coo_matrix((np.ones(nnz), (inv_users, inv_products)))
r = 2#Approximation rank
u, s, v = scipy.sparse.linalg.svds(spmat, r)
n = spmat.shape[0]
m = spmat.shape[1]
q = np.zeros(n)
user_id = 1
q[user_id] = 1.0
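# Score every product for this user via the rank-r reconstruction u * diag(s) * v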
qrec = q.T.dot(u).dot(np.diag(s)).dot(v)
qrec = qrec / np.max(abs(qrec))
plt.plot(qrec)
qb = spmat.T.dot(q)
plt.plot(qb)
plt.xlabel('Product ID')
plt.ylabel('Recommendation')
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
plt.xkcd()
n = 256
a = [[1.0/(i + j + 0.5) for i in range(n)] for j in range(n)]
a = np.array(a)
u, s, v = np.linalg.svd(a)
plt.semilogy(s/s[0])
plt.title('Singular values decay for a Hilbert matrix')
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
plt.xkcd()
n = 256
a = [[1.0/(i - j + 0.5) for i in xrange(n)] for j in xrange(n)]
a = np.array(a)
u, s, v = np.linalg.svd(a)
plt.plot(s)
plt.title('Singular values decay for a Cauchy-Hilbert matrix')
plt.tight_layout()
s[5] - np.pi
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computing matrix rank
Step2: So, small perturbations might crucially affect the rank.
Step3: Separation of variables for 2D functions
Step4: Now we can do something more interesting, like function approximation
Step5: And do some 3D plotting
Step6: Singular values of a random Gaussian matrix
Step7: Linear factor analysis & low-rank
Step8: Now we can compare the results between ordinary matvec and low-rank matvec.
Step9: Application 2
Step10: Application 3
Step11: A more realistic example
Step12: What to do?
|
1,378
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
class Net(tf.keras.Model):
"""A simple linear model."""
def __init__(self):
super(Net, self).__init__()
self.l1 = tf.keras.layers.Dense(5)
def call(self, x):
return self.l1(x)
net = Net()
net.save_weights('easy_checkpoint')
def toy_dataset():
inputs = tf.range(10.)[:, None]
labels = inputs * 5. + tf.range(5.)[None, :]
return tf.data.Dataset.from_tensor_slices(
dict(x=inputs, y=labels)).repeat().batch(2)
def train_step(net, example, optimizer):
"""Trains `net` on `example` using `optimizer`."""
with tf.GradientTape() as tape:
output = net(example['x'])
loss = tf.reduce_mean(tf.abs(output - example['y']))
variables = net.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss
opt = tf.keras.optimizers.Adam(0.1)
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
def train_and_checkpoint(net, manager):
ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
print("Restored from {}".format(manager.latest_checkpoint))
else:
print("Initializing from scratch.")
for _ in range(50):
example = next(iterator)
loss = train_step(net, example, opt)
ckpt.step.assign_add(1)
if int(ckpt.step) % 10 == 0:
save_path = manager.save()
print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
print("loss {:1.2f}".format(loss.numpy()))
train_and_checkpoint(net, manager)
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
train_and_checkpoint(net, manager)
print(manager.checkpoints) # List the three remaining checkpoints
!ls ./tf_ckpts
to_restore = tf.Variable(tf.zeros([5]))
print(to_restore.numpy()) # All zeros
fake_layer = tf.train.Checkpoint(bias=to_restore)
fake_net = tf.train.Checkpoint(l1=fake_layer)
new_root = tf.train.Checkpoint(net=fake_net)
status = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))
print(to_restore.numpy()) # We get the restored value now
status.assert_existing_objects_matched()
deferred_restore = tf.Variable(tf.zeros([1, 5]))
print(deferred_restore.numpy()) # Not restored; still zeros
fake_layer.kernel = deferred_restore
print(deferred_restore.numpy()) # Restored
tf.train.list_variables(tf.train.latest_checkpoint('./tf_ckpts/'))
# Load a checkpoint reader so we can look up shapes, dtypes and values by key
reader = tf.train.load_checkpoint('./tf_ckpts/')
shape_from_key = reader.get_variable_to_shape_map()
dtype_from_key = reader.get_variable_to_dtype_map()
key = 'net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE'
print("Shape:", shape_from_key[key])
print("Dtype:", dtype_from_key[key].name)
reader.get_tensor(key)
save = tf.train.Checkpoint()
save.listed = [tf.Variable(1.)]
save.listed.append(tf.Variable(2.))
save.mapped = {'one': save.listed[0]}
save.mapped['two'] = save.listed[1]
save_path = save.save('./tf_list_example')
restore = tf.train.Checkpoint()
v2 = tf.Variable(0.)
assert 0. == v2.numpy() # Not restored yet
restore.mapped = {'two': v2}
restore.restore(save_path)
assert 2. == v2.numpy()
restore.listed = []
print(restore.listed) # ListWrapper([])
v1 = tf.Variable(0.)
restore.listed.append(v1) # Restores v1, from restore() in the previous cell
assert 1. == v1.numpy()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Training checkpoints
Step3: Saving from tf.keras training APIs
Step5: Writing checkpoints
Step6: Creating the checkpoint objects
Step7: Training and checkpointing the model
Step8: Restore and continue training
Step9: The tf.train.CheckpointManager object deletes old checkpoints. Above it is configured to keep only the three most recent checkpoints.
Step10: These paths, e.g. './tf_ckpts/ckpt-10', are not files on disk; they are prefixes for an index file and one or more data files which contain the variable values. These prefixes are grouped together in a single checkpoint file ('./tf_ckpts/checkpoint') where the CheckpointManager saves its state.
Step11: <a id="loading_mechanics"></a>
Step12: The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint written above. It includes only the bias and a save counter that tf.train.Checkpoint uses to number checkpoints.
Step13: There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. status.assert_consumed() only passes if the checkpoint and the program match exactly, and would throw an exception here.
Step14: Manually inspecting checkpoints
Step15: If you are interested in the value of net.l1.kernel, you can retrieve it with the following code.
Step16: A get_tensor method is also provided, allowing you to inspect the value of a variable.
Step17: Object tracking
Step18: Note the wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data structures. Just like attribute-based loading, these wrappers restore a variable's value as soon as it is added to the container.
|
1,379
|
<ASSISTANT_TASK:>
Python Code:
%run ../stack/stack.py
%load ../stack/stack.py
import sys
class MyStack(Stack):
def __init__(self, top=None):
# TODO: Implement me
pass
def min(self):
# TODO: Implement me
pass
def push(self, data):
# TODO: Implement me
pass
def pop(self):
# TODO: Implement me
pass
# %load test_stack_min.py
from nose.tools import assert_equal
class TestStackMin(object):
def test_stack_min(self):
print('Test: Push on empty stack, non-empty stack')
stack = MyStack()
stack.push(5)
assert_equal(stack.peek(), 5)
assert_equal(stack.min(), 5)
stack.push(1)
assert_equal(stack.peek(), 1)
assert_equal(stack.min(), 1)
stack.push(3)
assert_equal(stack.peek(), 3)
assert_equal(stack.min(), 1)
stack.push(0)
assert_equal(stack.peek(), 0)
assert_equal(stack.min(), 0)
print('Test: Pop on non-empty stack')
assert_equal(stack.pop(), 0)
assert_equal(stack.min(), 1)
assert_equal(stack.pop(), 3)
assert_equal(stack.min(), 1)
assert_equal(stack.pop(), 1)
assert_equal(stack.min(), 5)
assert_equal(stack.pop(), 5)
assert_equal(stack.min(), sys.maxsize)
print('Test: Pop empty stack')
assert_equal(stack.pop(), None)
print('Success: test_stack_min')
def main():
test = TestStackMin()
test.test_stack_min()
if __name__ == '__main__':
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unit Test
|
1,380
|
<ASSISTANT_TASK:>
Python Code:
# Assumed setup: NetworKit's star import exposes the generators, gephi and community modules
from networkit import *
G = generators.ErdosRenyiGenerator(300, 0.2).generate()
G.addEdge(0, 1) #We want to make sure this specific edge exists, for usage in an example later.
client = gephi.streaming.GephiStreamingClient()
client.exportGraph(G)
communities = community.detectCommunities(G)
client.exportNodeValues(G, communities, "community")
edgeScore = [2*x for x in range(0, G.upperEdgeIdBound())]
client.exportEdgeValues(G, edgeScore, "myEdgeScore")
client = gephi.streaming.GephiStreamingClient(url='http://localhost:8080/workspace0')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ... and export it directly into the active gephi workspace. After executing the following code, the graph should be available in the first gephi workspace. Attention
Step2: Exporting node values
Step3: The node attribute can now be selected and used in gephi, for partitioning or any other desired scenario.
Step4: Changing the server URL
|
1,381
|
<ASSISTANT_TASK:>
Python Code:
import re, math, random # regexes, math functions, random numbers
import matplotlib.pyplot as plt # pyplot
from collections import defaultdict, Counter
from functools import partial, reduce
v = [1, 2]
w = [2, 1]
vectors = [v, w]
def vector_add(v, w):
"""adds two vectors componentwise"""
return [v_i + w_i for v_i, w_i in zip(v,w)]
vector_add(v, w)
def vector_subtract(v, w):
"""subtracts two vectors componentwise"""
return [v_i - w_i for v_i, w_i in zip(v,w)]
vector_subtract(v, w)
def vector_sum(vectors):
return reduce(vector_add, vectors)
vector_sum(vectors)
def scalar_multiply(c, v):
# c is a number, v is a vector
return [c * v_i for v_i in v]
scalar_multiply(2.5, v)
def vector_mean(vectors):
"""compute the vector whose i-th element is the mean of the
i-th elements of the input vectors"""
n = len(vectors)
return scalar_multiply(1/n, vector_sum(vectors))
vector_mean(vectors)
def dot(v, w):
"""v_1 * w_1 + ... + v_n * w_n"""
return sum(v_i * w_i for v_i, w_i in zip(v, w))
dot(v, w)
def sum_of_squares(v):
"""v_1 * v_1 + ... + v_n * v_n"""
return dot(v, v)
sum_of_squares(v)
def magnitude(v):
return math.sqrt(sum_of_squares(v))
magnitude(v)
def squared_distance(v, w):
return sum_of_squares(vector_subtract(v, w))
squared_distance(v, w)
def distance(v, w):
return math.sqrt(squared_distance(v, w))
distance(v, w)
A = [[1, 2, 3],
[4, 5, 6]]
B = [[1, 2],
[3, 4],
[5, 6]]
def shape(A):
num_rows = len(A)
num_cols = len(A[0]) if A else 0
return num_rows, num_cols
shape(A)
def get_row(A, i):
return A[i]
get_row(A, 1)
def get_column(A, j):
return [A_i[j] for A_i in A]
get_column(A, 2)
def make_matrix(num_rows, num_cols, entry_fn):
"""returns a num_rows x num_cols matrix
whose (i,j)-th entry is entry_fn(i, j);
entry_fn is a function for generating matrix elements."""
return [[entry_fn(i, j)
for j in range(num_cols)]
for i in range(num_rows)]
def entry_add(i, j):
"""a function for generating matrix elements."""
return i+j
make_matrix(5, 5, entry_add)
def is_diagonal(i, j):
"""1's on the 'diagonal',
0's everywhere else"""
return 1 if i == j else 0
identity_matrix = make_matrix(5, 5, is_diagonal)
identity_matrix
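# Quick sanity check (added for illustration): multiplying by the identity
# matrix returns the vector unchanged
[dot(get_row(identity_matrix, i), [1, 2, 3, 4, 5]) for i in range(5)]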
friendships = [(0, 1),
(0, 2),
(1, 2),
(1, 3),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(5, 7),
(6, 8),
(7, 8),
(8, 9)]
friendships = [[0, 1, 1, 0, 0, 0, 0, 0, 0, 0], # user 0
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0], # user 1
[1, 1, 0, 1, 0, 0, 0, 0, 0, 0], # user 2
[0, 1, 1, 0, 1, 0, 0, 0, 0, 0], # user 3
[0, 0, 0, 1, 0, 1, 0, 0, 0, 0], # user 4
[0, 0, 0, 0, 1, 0, 1, 1, 0, 0], # user 5
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 6
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 7
[0, 0, 0, 0, 0, 0, 1, 1, 0, 1], # user 8
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] # user 9
friendships[0][2] == 1 # True, 0 and 2 are friends
def matrix_add(A, B):
if shape(A) != shape(B):
raise ArithmeticError("cannot add matrices with different shapes")
num_rows, num_cols = shape(A)
def entry_fn(i, j): return A[i][j] + B[i][j]
return make_matrix(num_rows, num_cols, entry_fn)
A = make_matrix(5, 5, is_diagonal)
B = make_matrix(5, 5, entry_add)
matrix_add(A, B)
v = [2, 1]
w = [math.sqrt(.25), math.sqrt(.75)]
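# w is a unit vector, so dot(v, w) is the length of v's projection onto w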
c = dot(v, w)
vonw = scalar_multiply(c, w)
o = [0,0]
plt.figure(figsize=(4, 5), dpi = 100)
plt.arrow(0, 0, v[0], v[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("v", v, xytext=[v[0] + 0.01, v[1]])
plt.arrow(0 ,0, w[0], w[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("w", w, xytext=[w[0] - 0.1, w[1]])
plt.arrow(0, 0, vonw[0], vonw[1], length_includes_head=True)
plt.annotate(u"(v•w)w", vonw, xytext=[vonw[0] - 0.1, vonw[1] + 0.02])
plt.arrow(v[0], v[1], vonw[0] - v[0], vonw[1] - v[1],
linestyle='dotted', length_includes_head=True)
plt.scatter(*zip(v,w,o),marker='.')
plt.axis('equal')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Vectors
Step7: The dot product measures how far the vector v extends in the w direction.
Step11: Using lists as vectors
Step12: Matrices will be important.
|
1,382
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from harold import *
import matplotlib.pyplot as plt
G1 = Transfer([1, -1],[1, -2, 1, 0], dt=0.1) # discrete
G2 = Transfer([[1, [1, 3]],[0, [1, 2]]], [[[1, 2], [1, 0, -4]],[1, [1, -3]]])
G3 = State([[0, 1], [-0.1, -0.5]], [[0], [1]], [0, 3.5], 1, dt=0.1) # discrete
G4 = State([[-4, -2, 2], [0, -5, 2], [4, -3, -4]], [[1, 1], [0, 1], [2, -1]], [-1, 5, 2])
G1
G5 = Transfer([[[1, 1], 2], [[1, 3], 4]], [1, 5, 1])
print('Numerator :\n', G5.num, '\nDenominator :\n', G5.den)
M = np.array([[-4, -2, 2, 1, 1], [0, -5, 2, 0, 1], [4, -3, -4, 2, -1], [-1, 5, 2, 0, 0]])
G4 = State(*matrix_slice(M, corner_shape=[3, 3], corner='nw'))
G4.matrices
M = np.array([[-4, -2, 2, 1, 1],
[0, -5, 2, 0, 1],
[4, -3, -4, 2, -1],
[-1, 5, 2, 0, 0]])
G4 = State(*matrix_slice(M, corner_shape=[1, 3], corner='sw'))
G1.polynomials
G3.matrices
G4.a
G = random_state_model(5, p=3, m=2)
G
G = random_state_model(20, p=3, m=2, dt=0.01, prob_dist=[0, 0, 0.1, 0.9],stable=False) # 90% osc. modes, 10% integrators
G
H = transfer_to_state(G2)
J = state_to_transfer(G3)
H
J
Hm = minimal_realization(H)
Hm
G6 = Transfer([1], [1, 1.4, 1])
G6d = discretize(G6, dt=0.3, method='zoh')
G6d
from harold._global_constants import _KnownDiscretizationMethods as km
print(*km, sep='\n')
G3.DiscretizedWith is None
G6d.DiscretizedWith
G6dcz = undiscretize(G6d)
G6dcz
G6dct = undiscretize(G6d, method='tustin')
G6dct
G1 * G3
G4 @ G2
CL = feedback(G3, G1)
CL_min = minimal_realization(CL)
CL
CL_min
impulse_response_plot(G4);
G4_d = discretize(G4, 0.1, method='zoh')
impulse_response_plot(G4_d);
bode_plot(G4);
bode_plot(G4, use_db=True, use_hz=False, use_degree=False);
nyquist_plot(G2);
# Define some parameters
M, m, b, I, g, l = 0.5, 0.2, 0.1, 0.006, 9.8, 0.3
p = I*(M+m) + M*m*l**2 #denominator for the A and B matrices
A = np.array([[0, 1, 0, 0], [0, -(I+m*l**2)*b/p, (m**2*g*l**2)/p, 0],[0, 0, 0, 1], [0, -(m*l*b)/p, m*g*l*(M+m)/p, 0]])
B = np.array([[0], [(I+m*l**2)/p], [0], [m*l/p]])
C = np.array([[1, 0, 0, 0], [0, 0, 1, 0]])
sys_ss = State(A,B,C)
print('The system is Kalman controllable:', is_kalman_controllable(sys_ss))
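# Penalize only the measured outputs (cart position and pendulum angle) to start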
Q = C.T @ C
K, X, eigs = lqr(sys_ss, Q) # R = 1 if omitted
print('Controller K gains : ', K)
sys_cl = State(A-B@K, B, C)
t = np.arange(0, 5, 0.01)
r =0.2*np.ones(len(t))
y, t = simulate_linear_system(sys_cl, u=r, t=t)
fig, ax = plt.subplots(1, 1)
ax.plot(t,y);
ax.grid(which='both')
Q[[0, 2], [0, 2]] = [5000, 100]
K, X, eigs = lqr(sys_ss, Q)
print('New Controller gains : ', K)
sys_cl = State(A-B@K, B, C)
y, t = simulate_linear_system(sys_cl, u=r, t=t)
fig, ax = plt.subplots(1, 1)
ax.plot(t,y);
ax.grid(which='both')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating models
Step2: Internally, the list objects are converted to NumPy arrays. Notice that when we are defining $G_4$, we skipped the $D$ matrix since it is zero. Discrete-time models can be defined with the same syntax but also adding dt keyword with a value in seconds, for example, State(a, b, c, d, dt=1). Getting used to using keywords in Python, both improves the readability and also avoids problems such as State(a, b, c, 1) which would take 1 as the $D$ element instead of the sampling period.
Step3: Now, we can directly identify the pole zero structure right away. Though MIMO state representations should be preferred exclusively, MIMO transfer models explicitly require a list of lists structure. Additionally, common denominators can also be given and it will be expanded to every entry in the denominator list of lists, say, $$G_5 = \frac{1}{s^2+5s+1}\begin{bmatrix}s+1&2\\s+3&4\end{bmatrix}$$
Step4: One typical inconvenience entering State models is that, it is much easier to type the whole array $\begin{bmatrix}A&B\C&D\end{bmatrix}$ and slice the individual arrays. However, this needs the extra slice indices which might be also laborious. The matrix_slice() function is provided for this purpose. Hence, suppose that the model data for $G_4$ above is given as an array already. Then creating $G_4$ is basically providing the slices and using the *-argument expansion of Python.
Step5: We could have sliced the array based on the shape of the C array and arrive at the same result
Step6: Further properties can be accessed via the typical dot syntax, let's check the model data of $G_1$
Step7: Also the model data of a State representation, for example $G_3$, or just the $A$ matrix of $G_4$
Step8: Finally, we can also create random State models via random_state_model
Step9: We can also force the random model to have more chance to have oscillatory modes by changing the probability distribution of the selected poles. Assume that we want a discrete-time model with mostly poles on the imaginary axis and occasional integrators.
Step10: Conversions of Models
Step11: Minimal Realization
Step12: We can see that while one of the $-2$ poles is removed, the others are not cancelled.
Step13: Current known discretization methods are
Step14: The forward xxx, backward xxx are aliases of >> and <<. Similarly, bilinear is an alias of tustin.
Step15: Now, if we convert these models back, $G_3$ will be converted using the default method tustin; however, $G_{6d}$ will be converted via the zero-order hold method. Had this information not been present, the resulting continuous model would be slightly different from what we started with.
Step16: Model algebra
Step17: We observe that there is a pole/zero cancellation at 1. which is removed afterwards.
Step18: Basic Plotting Functionality
Step19: The discrete-time plant plots are automatically drawn as stairs.
Step20: The plot units can be changed via the keywords available
Step21: An LQR example
Step22: Now, the weights on the position states are increased
|
1,383
|
<ASSISTANT_TASK:>
Python Code:
!pip install spacy nltk
text = """'Please would you tell me,' said Alice, a little timidly, for she was not quite sure whether it was good manners for her to speak first, 'why your cat grins like that?'
'It's a Cheshire cat,' said the Duchess, 'and that's why. Pig!'
She said the last word with such sudden violence that Alice quite jumped; but she saw in another moment that it was addressed to the baby, and not to her, so she took courage, and went on again:—
'I didn't know that Cheshire cats always grinned; in fact, I didn't know that cats could grin.'
'They all can,' said the Duchess; 'and most of 'em do.'
'I don't know of any that do,' Alice said very politely, feeling quite pleased to have got into a conversation.
'You don't know much,' said the Duchess; 'and that's a fact.'"""
import spacy
import spacy.en.download
# spacy.en.download.main()
processor = spacy.en.English()
processed_text = processor(text)
processed_text
n = 0
for sentence in processed_text.sents:
print(n, sentence)
n+=1
n = 0
for sentence in processed_text.sents:
for token in sentence:
print(n, token, token.pos_, token.lemma_)
n+=1
for entity in processed_text.ents:
print(entity, entity.label_)
for noun_chunk in processed_text.noun_chunks:
print(noun_chunk)
def pr_tree(word, level):
if word.is_punct:
return
for child in word.lefts:
pr_tree(child, level+1)
print('\t'* level + word.text + ' - ' + word.dep_)
for child in word.rights:
pr_tree(child, level+1)
for sentence in processed_text.sents:
pr_tree(sentence.root, 0)
print('-------------------------------------------')
proc_fruits = processor('''I think green apples are delicious.
While pears have a strange texture to them.
The bowls they sit in are ugly.''')
apples, pears, bowls = proc_fruits.sents
fruit = processed_text.vocab['fruit']
print(apples.similarity(fruit))
print(pears.similarity(fruit))
print(bowls.similarity(fruit))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Spacy Documentation
Step3: Download and load the model. SpaCy has an excellent English NLP processor. It has the following features which we shall explore
Step4: Looks like the same text? Let's dig a little deeper
Step5: Words and Punctuation - Along with POS tagging
Step6: Entities - Explanation of Entity Types
Step7: Noun Chunks
Step8: The Semi Holy Grail - Syntactic Depensy Parsing See Demo for clarity
Step9: What is 'nsubj'? 'acomp'? See The Universal Dependencies
|
1,384
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15,8)
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
disaster_data = np.array([ 4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
years = np.arange(1851, 1962)
plt.plot(years, disaster_data, 'o', markersize=8);
plt.ylabel('Disaster count')
plt.xlabel('Year')
plt.title('Mining disaster data set')
plt.show()
def disaster_count_model(disaster_rate_fn):
disaster_count = tfd.JointDistributionNamed(dict(
e=tfd.Exponential(rate=1.),
l=tfd.Exponential(rate=1.),
s=tfd.Uniform(0., high=len(years)),
d_t=lambda s, l, e: tfd.Independent(
tfd.Poisson(rate=disaster_rate_fn(np.arange(len(years)), s, l, e)),
reinterpreted_batch_ndims=1)
))
return disaster_count
def disaster_rate_switch(ys, s, l, e):
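# Hard switch: rate is e strictly before the switchpoint s and l afterwards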
return tf.where(ys < s, e, l)
def disaster_rate_sigmoid(ys, s, l, e):
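# Smooth transition from the early rate e to the late rate l, centered at the switchpoint s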
return e + tf.sigmoid(ys - s) * (l - e)
model_switch = disaster_count_model(disaster_rate_switch)
model_sigmoid = disaster_count_model(disaster_rate_sigmoid)
def target_log_prob_fn(model, s, e, l):
return model.log_prob(s=s, e=e, l=l, d_t=disaster_data)
models = [model_switch, model_sigmoid]
print([target_log_prob_fn(m, 40., 3., .9).numpy() for m in models]) # Somewhat likely result
print([target_log_prob_fn(m, 60., 1., 5.).numpy() for m in models]) # Rather unlikely result
print([target_log_prob_fn(m, -10., 1., 1.).numpy() for m in models]) # Impossible result
num_results = 10000
num_burnin_steps = 3000
@tf.function(autograph=False, experimental_compile=True)
def make_chain(target_log_prob_fn):
kernel = tfp.mcmc.TransformedTransitionKernel(
inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.05,
num_leapfrog_steps=3),
bijector=[
# The switchpoint is constrained between zero and len(years).
# Hence we supply a bijector that maps the real numbers (in a
          # differentiable way) to the interval (0, len(years))
tfb.Sigmoid(low=0., high=tf.cast(len(years), dtype=tf.float32)),
# Early and late disaster rate: The exponential distribution is
# defined on the positive real numbers
tfb.Softplus(),
tfb.Softplus(),
])
kernel = tfp.mcmc.SimpleStepSizeAdaptation(
inner_kernel=kernel,
num_adaptation_steps=int(0.8*num_burnin_steps))
states = tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
# The three latent variables
tf.ones([], name='init_switchpoint'),
tf.ones([], name='init_early_disaster_rate'),
tf.ones([], name='init_late_disaster_rate'),
],
trace_fn=None,
kernel=kernel)
return states
switch_samples = [s.numpy() for s in make_chain(
lambda *args: target_log_prob_fn(model_switch, *args))]
sigmoid_samples = [s.numpy() for s in make_chain(
lambda *args: target_log_prob_fn(model_sigmoid, *args))]
switchpoint, early_disaster_rate, late_disaster_rate = zip(
switch_samples, sigmoid_samples)
def _desc(v):
return '(median: {}; 95%ile CI: $[{}, {}]$)'.format(
*np.round(np.percentile(v, [50, 2.5, 97.5]), 2))
for t, v in [
('Early disaster rate ($e$) posterior samples', early_disaster_rate),
('Late disaster rate ($l$) posterior samples', late_disaster_rate),
('Switch point ($s$) posterior samples', years[0] + switchpoint),
]:
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True)
for (m, i) in (('Switch', 0), ('Sigmoid', 1)):
a = ax[i]
a.hist(v[i], bins=50)
a.axvline(x=np.percentile(v[i], 50), color='k')
a.axvline(x=np.percentile(v[i], 2.5), color='k', ls='dashed', alpha=.5)
a.axvline(x=np.percentile(v[i], 97.5), color='k', ls='dashed', alpha=.5)
a.set_title(m + ' model ' + _desc(v[i]))
fig.suptitle(t)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bayesian switchpoint analysis
Step2: Dataset
Step3: Probabilistic model
Step4: The code above defines the model via a JointDistributionNamed distribution. The disaster_rate function is called with an array of [0, ..., len(years)-1] to produce a vector of len(years) random variables. Years before the switchpoint take the early disaster rate, and years after it take the late disaster rate (modulo the sigmoid transition).
Step5: HMC for performing Bayesian inference
Step6: Run both models in parallel.
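As a quick illustration of the switch parameterization, a hedged sketch reusing disaster_rate_switch from the code above:
# Before the switchpoint s=3 the rate is e=4.0; from the switchpoint on it is l=1.0
print(disaster_rate_switch(np.arange(6, dtype=np.float32), s=3., l=1., e=4.).numpy())
# expected output: [4. 4. 4. 1. 1. 1.]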
|
1,385
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact
def char_probs(s):
    """Find the probabilities of the unique characters in the string s.

    Parameters
    ----------
    s : str
        A string of characters.

    Returns
    -------
    probs : dict
        A dictionary whose keys are the unique characters in s and whose values
        are the probabilities of those characters.
    """
    n = len(s)
    # Each character's probability is its count divided by the string length
    return {c: s.count(c) / float(n) for c in set(s)}
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    probs = np.array(list(d.values()))
    return -np.sum(probs * np.log2(probs))
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for grading the pi digits histogram
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Character counting and entropy
Step4: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as
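$H = -\sum_i P_i \log_2(P_i)$, presumably the standard Shannon entropy in bits (consistent with the base-2 assertions in the code above).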
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
|
1,386
|
<ASSISTANT_TASK:>
Python Code:
import openchemistry as oc
mol = oc.find_structure('InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H')
mol.structure.show()
image_name = 'openchemistry/chemml:0.6.0'
input_parameters = {}
result = mol.calculate(image_name, input_parameters)
result.properties.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start by finding structures using online databases (or cached local results). This uses an InChI for a known structure that will be added if not already present using Open Babel.
Step2: Set up the calculation, by specifying the name of the Docker image that will be used, and by providing input parameters that are known to the specific image
Step3: Predict Properties from ML Model
|
1,387
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install git+https://github.com/tensorflow/examples.git
!pip install -U tfds-nightly
import tensorflow as tf
from tensorflow_examples.models.pix2pix import pix2pix
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from IPython.display import clear_output
import matplotlib.pyplot as plt
dataset, info = tfds.load('oxford_iiit_pet:3.*.*', with_info=True)
def normalize(input_image, input_mask):
input_image = tf.cast(input_image, tf.float32) / 255.0
input_mask -= 1
return input_image, input_mask
@tf.function
def load_image_train(datapoint):
input_image = tf.image.resize(datapoint['image'], (128, 128))
input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))
if tf.random.uniform(()) > 0.5:
input_image = tf.image.flip_left_right(input_image)
input_mask = tf.image.flip_left_right(input_mask)
input_image, input_mask = normalize(input_image, input_mask)
return input_image, input_mask
def load_image_test(datapoint):
input_image = tf.image.resize(datapoint['image'], (128, 128))
input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))
input_image, input_mask = normalize(input_image, input_mask)
return input_image, input_mask
TRAIN_LENGTH = info.splits['train'].num_examples
BATCH_SIZE = 64
BUFFER_SIZE = 1000
STEPS_PER_EPOCH = TRAIN_LENGTH // BATCH_SIZE
train = dataset['train'].map(load_image_train, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test = dataset['test'].map(load_image_test)
train_dataset = train.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
train_dataset = train_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
test_dataset = test.batch(BATCH_SIZE)
def display(display_list):
plt.figure(figsize=(15, 15))
title = ['Input Image', 'True Mask', 'Predicted Mask']
for i in range(len(display_list)):
plt.subplot(1, len(display_list), i+1)
plt.title(title[i])
plt.imshow(tf.keras.preprocessing.image.array_to_img(display_list[i]))
plt.axis('off')
plt.show()
for image, mask in train.take(1):
sample_image, sample_mask = image, mask
display([sample_image, sample_mask])
OUTPUT_CHANNELS = 3
base_model = tf.keras.applications.MobileNetV2(input_shape=[128, 128, 3], include_top=False)
# Use the activations of these layers
layer_names = [
'block_1_expand_relu', # 64x64
'block_3_expand_relu', # 32x32
'block_6_expand_relu', # 16x16
'block_13_expand_relu', # 8x8
'block_16_project', # 4x4
]
layers = [base_model.get_layer(name).output for name in layer_names]
# Create the feature-extraction model
down_stack = tf.keras.Model(inputs=base_model.input, outputs=layers)
down_stack.trainable = False
up_stack = [
pix2pix.upsample(512, 3), # 4x4 -> 8x8
pix2pix.upsample(256, 3), # 8x8 -> 16x16
pix2pix.upsample(128, 3), # 16x16 -> 32x32
pix2pix.upsample(64, 3), # 32x32 -> 64x64
]
def unet_model(output_channels):
inputs = tf.keras.layers.Input(shape=[128, 128, 3])
x = inputs
    # Downsample through the model
skips = down_stack(x)
x = skips[-1]
skips = reversed(skips[:-1])
    # Upsample and establish the skip connections
for up, skip in zip(up_stack, skips):
x = up(x)
concat = tf.keras.layers.Concatenate()
x = concat([x, skip])
    # This is the last layer of the model
last = tf.keras.layers.Conv2DTranspose(
output_channels, 3, strides=2,
padding='same') #64x64 -> 128x128
x = last(x)
return tf.keras.Model(inputs=inputs, outputs=x)
model = unet_model(OUTPUT_CHANNELS)
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
tf.keras.utils.plot_model(model, show_shapes=True)
def create_mask(pred_mask):
pred_mask = tf.argmax(pred_mask, axis=-1)
pred_mask = pred_mask[..., tf.newaxis]
return pred_mask[0]
def show_predictions(dataset=None, num=1):
if dataset:
for image, mask in dataset.take(num):
pred_mask = model.predict(image)
display([image[0], mask[0], create_mask(pred_mask)])
else:
display([sample_image, sample_mask,
create_mask(model.predict(sample_image[tf.newaxis, ...]))])
show_predictions()
class DisplayCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
clear_output(wait=True)
show_predictions()
    print('\nSample prediction after epoch {}\n'.format(epoch+1))
EPOCHS = 20
VAL_SUBSPLITS = 5
VALIDATION_STEPS = info.splits['test'].num_examples//BATCH_SIZE//VAL_SUBSPLITS
model_history = model.fit(train_dataset, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
validation_steps=VALIDATION_STEPS,
validation_data=test_dataset,
callbacks=[DisplayCallback()])
loss = model_history.history['loss']
val_loss = model_history.history['val_loss']
epochs = range(EPOCHS)
plt.figure()
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'bo', label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss Value')
plt.ylim([0, 1])
plt.legend()
plt.show()
show_predictions(test_dataset, 3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image segmentation
Step2: Download the Oxford-IIIT Pets dataset
Step3: The following code performs a simple augmentation of flipping an image. In addition, the image is normalized to [0,1]. Finally, as mentioned above, the pixels in the segmentation mask are labeled {1, 2, 3}. For convenience, let's subtract 1 from the segmentation mask so that the labels become {0, 1, 2}.
Step4: The dataset already contains the required train and test splits, so let's keep using the same splits.
Step5: Let's look at an example image and its corresponding mask from the dataset.
Step6: Define the model
Step7: As mentioned, the encoder will be a pretrained MobileNetV2 model, ready for use from tf.keras.applications. The encoder consists of specific outputs from intermediate layers of the model. Note that the encoder is not trained during the training process.
Step8: The decoder/upsampler is a series of upsample blocks implemented in TensorFlow examples.
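Each upsample block doubles the spatial resolution; a minimal sketch verifying this for a strided transposed convolution (assumes the TensorFlow import from the code above):
# With padding='same' and strides=2, height and width are doubled
x = tf.zeros([1, 64, 64, 64])
up = tf.keras.layers.Conv2DTranspose(3, 3, strides=2, padding='same')
print(up(x).shape)  # (1, 128, 128, 3)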
Step9: Train the model
Step10: Let's take a quick look at the architecture of the resulting model.
Step11: Let's try out the model and see what it predicts before training.
Step12: Let's observe how the model improves while training. A callback function is defined below to accomplish this task.
Step13: Make predictions
|
1,388
|
<ASSISTANT_TASK:>
Python Code:
import cv2
import numpy as np
array_of_image = np.array(image)  # `image` is assumed to be a PIL image loaded earlier
image_rgb = cv2.cvtColor(array_of_image, cv2.COLOR_BGR2RGB)  # swap channel order for cv2
cv2.imshow('image', image_rgb)
cv2.waitKey(0)  # imshow needs a waitKey call to actually render the window
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,389
|
<ASSISTANT_TASK:>
Python Code:
def get_result_df(y_true_unscale, y_pred_unscale, ano_index, look_back, target_col='cpu_usage'):
    """Add prediction and anomaly value to dataframe."""
    result_df = pd.DataFrame({"y_true": y_true_unscale.squeeze(), "y_pred": y_pred_unscale.squeeze()})
result_df['anomalies'] = 0
result_df.loc[result_df.index[ano_index], 'anomalies'] = 1
result_df['anomalies'] = result_df['anomalies'] > 0
return result_df
def plot_anomalies_value(date, y_true, y_pred, anomalies):
    """Plot the anomalous values."""
fig, axs = plt.subplots(figsize=(16,6))
axs.plot(date, y_true,color='blue', label='y_true')
axs.plot(date, y_pred,color='orange', label='y_pred')
axs.scatter(date[anomalies].tolist(), y_true[anomalies], color='red', label='anomalies value')
axs.set_title('the anomalies value')
plt.xlabel('datetime')
plt.legend(loc='upper left')
plt.show()
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df_1932 = pd.read_csv("m_1932.csv", header=None, usecols=[1,2,3], names=["time_step", "cpu_usage","mem_usage"])
df_1932.head()
df_1932.sort_values(by="time_step", inplace=True)
df_1932.reset_index(inplace=True)
df_1932.sort_values(by="time_step").plot(y="cpu_usage", x="time_step", figsize=(16,6),title="cpu_usage of machine 1932")
df_1932.reset_index(inplace=True)
df_1932["time_step"] = pd.to_datetime(df_1932["time_step"], unit='s', origin=pd.Timestamp('2018-01-01'))
from bigdl.chronos.data import TSDataset
from sklearn.preprocessing import StandardScaler
# we look back one hour data which is of the frequency of 1min.
look_back = 60
horizon = 1
tsdata_train, tsdata_val, tsdata_test = TSDataset.from_pandas(df_1932, dt_col="time_step", target_col="cpu_usage", with_split=True, val_ratio = 0.1, test_ratio=0.1)
standard_scaler = StandardScaler()
for tsdata in [tsdata_train, tsdata_val, tsdata_test]:
tsdata.resample(interval='1min', merge_mode="mean")\
.impute(mode="last")\
.gen_dt_feature()\
.scale(standard_scaler, fit=(tsdata is tsdata_train))\
          .roll(lookback=look_back, horizon=horizon, feature_col=["HOUR"])
x_train, y_train = tsdata_train.to_numpy()
x_val, y_val = tsdata_val.to_numpy()
x_test, y_test = tsdata_test.to_numpy()
y_train, y_val, y_test = y_train[:, 0, :], y_val[:, 0, :], y_test[:, 0, :]
x_train.shape, y_train.shape, x_val.shape, y_val.shape, x_test.shape, y_test.shape
from bigdl.chronos.forecaster.tf.mtnet_forecaster import MTNetForecaster
mtnet_forecaster = MTNetForecaster(target_dim=horizon,
feature_dim=x_train.shape[-1],
long_series_num=3,
series_length=15
)
%%time
mtnet_forecaster.fit(data=(x_train, y_train), batch_size=128, epochs=20)
y_pred_val = mtnet_forecaster.predict(x_val)
y_pred_test = mtnet_forecaster.predict(x_test)
y_pred_val_unscale = tsdata_val.unscale_numpy(np.expand_dims(y_pred_val, axis=1))[:, 0, :]
y_pred_test_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_pred_test, axis=1))[:, 0, :]
y_val_unscale = tsdata_val.unscale_numpy(np.expand_dims(y_val, axis=1))[:, 0, :]
y_test_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_test, axis=1))[:, 0, :]
# evaluate with sMAPE
from bigdl.orca.automl.metrics import Evaluator
smape = Evaluator.evaluate("smape", y_test_unscale, y_pred_test_unscale)
print(f"sMAPE is {'%.2f' % smape}")
from bigdl.chronos.detector.anomaly import ThresholdDetector
ratio=0.01
thd=ThresholdDetector()
thd.set_params(ratio=ratio)
thd.fit(y_val_unscale,y_pred_val_unscale)
print("The threshold of validation dataset is:",thd.th)
anomaly_scores_val = thd.score()
val_res_ano_idx = np.where(anomaly_scores_val > 0)[0]
print("The index of anomalies in validation dataset is:",val_res_ano_idx)
anomaly_scores_test = thd.score(y_test_unscale,y_pred_test_unscale)
test_res_ano_idx = np.where(anomaly_scores_test > 0)[0]
print("The index of anoalies in test dataset is:",test_res_ano_idx)
val_result_df = get_result_df(y_val_unscale, y_pred_val_unscale, val_res_ano_idx, look_back)
test_result_df = get_result_df(y_test_unscale, y_pred_test_unscale, test_res_ano_idx, look_back)
plot_anomalies_value(val_result_df.index, val_result_df.y_true, val_result_df.y_pred, val_result_df.anomalies)
plot_anomalies_value(test_result_df.index, test_result_df.y_true, test_result_df.y_pred, test_result_df.anomalies)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Unsupervised Anomaly Detection based on Forecasts
Step3: Download raw dataset and load into dataframe
Step4: Below are some example records of the data
Step5: Data pre-processing
Step6: Feature Engineering & Data Preparation
Step7: Time series forecasting
Step8: First, we initialize an mtnet_forecaster according to the input data shape. Specifically, look_back should equal (long_series_num+1)*series_length; here (3+1)*15 = 60, which matches look_back. For details, refer to the chronos docs <a href="https
Step9: Now we train the model and wait till it finishes.
Step10: Use the model for prediction and inverse the scaling of the prediction results.
Step11: Calculate the symetric mean absolute percentage error.
Step12: Anomaly detection
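Conceptually, the ratio-based threshold flags roughly the largest ratio fraction of forecast errors; a rough numpy sketch of the idea (not the library's exact implementation):
errors = np.abs(y_val_unscale - y_pred_val_unscale).ravel()
approx_th = np.quantile(errors, 1 - 0.01)  # ratio = 0.01
print("approximate threshold:", approx_th)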
Step13: Get a new dataframe which contains y_true,y_pred,anomalies value.
Step14: Draw anomalies in line chart.
|
1,390
|
<ASSISTANT_TASK:>
Python Code:
type('I am amazing!')
type(145)
type(2.5)
print("Hello World")
"Hello World"
"Hello World";
WHALE = 'Orca'
number_of_whales = 10
weight_of_1_whale = 5003.2
print(number_of_whales)
1 + 2
fish = 15
fish_left = fish - 3
print(fish_left)
print(3 * 2.1)
number_of_whales ** 2
print(5 / 2)
number_of_whales = 8
number_of_whales = number_of_whales + 2
print(number_of_whales)
2 * 3 + 4 / 2
(2 * (3 + 4)) / 2
5 % 2
print('Hello ' + 'Coding Circle')
print("The " + WHALE + " lives in the sea.")
print("My name is" + "Charlotte")
my_name = input()
print(my_name)
favorite_ocean_animal = input("What is your favorite sea creature?\n")
print("The " + favorite_ocean_animal + " is so cool!")
number_of_fish = input("How many fish do you want?\n")
number_of_fish_int = int(number_of_fish)
print(number_of_fish_int * 1.05)
# Calculate the price of fish that a user wants
number_of_fish = input("How many fish do you want?\n") # Ask user for quantity of fish
number_of_fish_int = int(number_of_fish) # input() returns a string, so convert to an integer
print(number_of_fish_int * 1.05) # multiply by price of fish
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To print a value to the screen, we use the function print()
Step2: Jupyter notebooks will always print the value of the last line so you don't have to. You can suppress this with a semicolon ';'
Step3: TRY IT
Step4: Notice that when you ran that, nothing printed out. To print a variable, you use the same statement you would use to print the value. e.g. print(WHALE)
Step5: TRY IT
Step6: Hint
Step7: TRY IT
Step8: Modulus operator
Step9: TRY IT
Step10: Hint
Step11: TRY IT
Step12: You can pass a string to the input() function to prompt the user for what you are looking for.
Step13: If you want the user to enter a number, you will have to convert the string. Here are the conversion commands.
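The conversion commands are presumably the built-in constructors int(), float(), and str(); for example:
print(int("42") + 1)     # 43
print(float("2.5") * 2)  # 5.0
print(str(3) + " fish")  # 3 fish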
Step14: TRY IT
|
1,391
|
<ASSISTANT_TASK:>
Python Code:
consume_plot()
interact(consume_plot,p1=(pmin,pmax,0.1),p2=(pmin,pmax,0.1), I=(Imin,Imax,10),alpha=(0.05,0.95,0.05));
consume_plot2(r, delta, rho, y1, y2)
interact(consume_plot2, r=(rmin,rmax,0.1), rho=fixed(rho), delta=(0.5,1,0.1), y1=(10,100,1), y2=(10,100,1));
c1e, c2e, uebar = find_opt2(r, rho, delta, y1, y2)
c1e, c2e
y1,y2
y1-c1e
y1,y2 = 20,80
consume_plot2(r, delta, rho, y1, y2)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
def U(c1, c2, alpha):
return (c1**alpha)*(c2**(1-alpha))
def budgetc(c1, p1, p2, I):
return (I/p2)-(p1/p2)*c1
def indif(c1, ubar, alpha):
return (ubar/(c1**alpha))**(1/(1-alpha))
def find_opt(p1,p2,I,alpha):
c1 = alpha * I/p1
c2 = (1-alpha)*I/p2
u = U(c1,c2,alpha)
return c1, c2, u
alpha = 0.5
p1, p2 = 1, 1
I = 100
pmin, pmax = 1, 4
Imin, Imax = 10, 200
cmax = (3/4)*Imax/pmin
def consume_plot(p1=p1, p2=p2, I=I, alpha=alpha):
c1 = np.linspace(0.1,cmax,num=100)
c1e, c2e, uebar = find_opt(p1, p2 ,I, alpha)
idfc = indif(c1, uebar, alpha)
budg = budgetc(c1, p1, p2, I)
fig, ax = plt.subplots(figsize=(8,8))
ax.plot(c1, budg, lw=2.5)
ax.plot(c1, idfc, lw=2.5)
ax.vlines(c1e,0,c2e, linestyles="dashed")
ax.hlines(c2e,0,c1e, linestyles="dashed")
ax.plot(c1e,c2e,'ob')
ax.set_xlim(0, cmax)
ax.set_ylim(0, cmax)
ax.set_xlabel(r'$c_1$', fontsize=16)
ax.set_ylabel('$c_2$', fontsize=16)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid()
plt.show()
def u(c, rho):
    # CRRA utility: u(c) = c**(1-rho)/(1-rho), so marginal utility is c**(-rho);
    # this matches the inversions in indif2 and the Euler equation in find_opt2
    return c**(1-rho) / (1-rho)
def U2(c1, c2, rho, delta):
return u(c1, rho) + delta*u(c2, rho)
def budget2(c1, r, y1, y2):
Ey = y1 + y2/(1+r)
return Ey*(1+r) - c1*(1+r)
def indif2(c1, ubar, rho, delta):
return ( ((1-rho)/delta)*(ubar - u(c1, rho)) )**(1/(1-rho))
def find_opt2(r, rho, delta, y1, y2):
Ey = y1 + y2/(1+r)
A = (delta*(1+r))**(1/rho)
c1 = Ey/(1+A/(1+r))
c2 = c1*A
u = U2(c1, c2, rho, delta)
return c1, c2, u
rho = 0.5
delta = 1
r = 0
y1, y2 = 80, 20
rmin, rmax = 0, 1
cmax = 150
def consume_plot2(r, delta, rho, y1, y2):
c1 = np.linspace(0.1,cmax,num=100)
c1e, c2e, uebar = find_opt2(r, rho, delta, y1, y2)
idfc = indif2(c1, uebar, rho, delta)
budg = budget2(c1, r, y1, y2)
fig, ax = plt.subplots(figsize=(8,8))
ax.plot(c1, budg, lw=2.5)
ax.plot(c1, idfc, lw=2.5)
ax.vlines(c1e,0,c2e, linestyles="dashed")
ax.hlines(c2e,0,c1e, linestyles="dashed")
ax.plot(c1e,c2e,'ob')
ax.vlines(y1,0,y2, linestyles="dashed")
ax.hlines(y2,0,y1, linestyles="dashed")
ax.plot(y1,y2,'ob')
ax.text(y1-6,y2-6,r'$y^*$',fontsize=16)
ax.set_xlim(0, cmax)
ax.set_ylim(0, cmax)
ax.set_xlabel(r'$c_1$', fontsize=16)
ax.set_ylabel('$c_2$', fontsize=16)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid()
plt.show()
interact(consume_plot2, r=(rmin,rmax,0.1), rho=fixed(rho), delta=(0.5,1,0.1), y1=(10,100,1), y2=(10,100,1));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The consumer's optimum
Step2: The expenditure function
Step3: Interactive plot with sliders (visible if if running on a notebook server)
Step4: In this particular case the consumer consumes
Step5: Her endowment is
Step6: And she therefore saves
Step7: in period 1.
Step8: <a id='codesection'></a>
Step9: Code for simple consumer choice
Step10: Parameters for default plot
Step11: Code for intertemporal Choice model
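For reference, with CRRA utility $u(c) = c^{1-\rho}/(1-\rho)$, find_opt2 implements the standard Euler condition and lifetime budget constraint: $u'(c_1) = \delta (1+r) u'(c_2) \Rightarrow c_2 = c_1 [\delta(1+r)]^{1/\rho}$, with $c_1 = \frac{y_1 + y_2/(1+r)}{1 + [\delta(1+r)]^{1/\rho}/(1+r)}$.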
Step12: Parameters for default plot
Step13: Interactive plot
|
1,392
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
loans = graphlab.SFrame('lending-club-data.gl/')
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
train_data, validation_set = loans_data.random_split(.8, seed=1)
def reached_minimum_node_size(data, min_node_size):
    # Return True if the number of data points is less than or equal to the minimum node size.
    return len(data) <= min_node_size
def error_reduction(error_before_split, error_after_split):
# Return the error before the split minus the error after the split.
error = error_before_split - error_after_split
return error
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
num_of_positive = (labels_in_node == +1).sum()
# Count the number of -1's (risky loans)
num_of_negative = (labels_in_node == -1).sum()
# Return the number of mistakes that the majority classifier makes.
return num_of_negative if num_of_positive > num_of_negative else num_of_positive
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should intialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split[target])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = (left_mistakes + right_mistakes) / num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_feature = feature
best_error = error
return best_feature # Return the best feature we found
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = +1
else:
leaf['prediction'] = -1
# Return the leaf node
return leaf
def decision_tree_create(data, features, target, current_depth = 0,
max_depth = 10, min_node_size=1,
min_error_reduction=0.0):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1: All nodes are of the same type.
if intermediate_node_num_mistakes(target_values) == 0:
print "Stopping condition 1 reached. All data points have the same target value."
return create_leaf(target_values)
# Stopping condition 2: No more features to split on.
if remaining_features == []:
print "Stopping condition 2 reached. No remaining features."
return create_leaf(target_values)
# Early stopping condition 1: Reached max depth limit.
if current_depth >= max_depth:
print "Early stopping condition 1 reached. Reached maximum depth."
return create_leaf(target_values)
# Early stopping condition 2: Reached the minimum node size.
# If the number of data points is less than or equal to the minimum size, return a leaf.
if reached_minimum_node_size(data, min_node_size):
print "Early stopping condition 2 reached. Reached minimum node size."
return create_leaf(target_values)
# Find the best splitting feature
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
# Early stopping condition 3: Minimum error reduction
# Calculate the error before splitting (number of misclassified examples
# divided by the total number of examples)
error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))
# Calculate the error after splitting (number of misclassified examples
# in both groups divided by the total number of examples)
left_mistakes = intermediate_node_num_mistakes(left_split[target])
right_mistakes = intermediate_node_num_mistakes(right_split[target])
error_after_split = (left_mistakes + right_mistakes) / float(len(data))
# If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf.
if error_reduction(error_before_split, error_after_split) <= min_error_reduction: ## YOUR CODE HERE
print "Early stopping condition 3 reached. Minimum error reduction."
return create_leaf(target_values)
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
small_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 10, min_error_reduction=0.0)
if count_nodes(small_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_decision_tree)
    print 'Number of nodes that should be there : 7'
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 100, min_error_reduction=0.0)
my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
validation_set[0]
print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_set[0])
classify(my_decision_tree_new, validation_set[0], annotate = True)
classify(my_decision_tree_old, validation_set[0], annotate = True)
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
num_of_mistakes = (prediction != data[target]).sum()/float(len(data))
return num_of_mistakes
evaluate_classification_error(my_decision_tree_new, validation_set)
evaluate_classification_error(my_decision_tree_old, validation_set)
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 0, min_error_reduction=-1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14,
min_node_size = 0, min_error_reduction=-1)
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data)
print "validation_set, classification error (model 1):", evaluate_classification_error(model_1, validation_set)
print "validation_set, classification error (model 2):", evaluate_classification_error(model_2, validation_set)
print "validation_set, classification error (model 3):", evaluate_classification_error(model_3, validation_set)
def count_leaves(tree):
if tree['is_leaf']:
return 1
return count_leaves(tree['left']) + count_leaves(tree['right'])
print "Number of nodes (model 1):", count_leaves(model_1)
print "Number of nodes (model 2):", count_leaves(model_2)
print "Number of nodes (model 3):", count_leaves(model_3)
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=5)
print "Validation data, classification error (model 4):", evaluate_classification_error(model_4, validation_set)
print "Validation data, classification error (model 5):", evaluate_classification_error(model_5, validation_set)
print "Validation data, classification error (model 6):", evaluate_classification_error(model_6, validation_set)
print "Number of nodes (model 4):", count_leaves(model_4)
print "Number of nodes (model 5):", count_leaves(model_5)
print "Number of nodes (model 6):", count_leaves(model_6)
model_7 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_8 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 2000, min_error_reduction=-1)
model_9 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 50000, min_error_reduction=-1)
print "Validation data, classification error (model 7):", evaluate_classification_error(model_7, validation_set)
print "Validation data, classification error (model 8):", evaluate_classification_error(model_8, validation_set)
print "Validation data, classification error (model 9):", evaluate_classification_error(model_9, validation_set)
print "Number of nodes (model 7):", count_leaves(model_7)
print "Number of nodes (model 8):", count_leaves(model_8)
print "Number of nodes (model 9):", count_leaves(model_9)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load LendingClub Dataset
Step2: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: We will be using the same 4 categorical features as in the previous assignment
Step4: Subsample dataset to make sure classes are balanced
Step5: Note
Step6: The feature columns now look like this
Step7: Train-Validation split
Step8: Early stopping methods for decision trees
Step9: Quiz question
Step10: Quiz question
Step11: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
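best_splitting_feature relies on that majority-class mistake counter; a quick sanity check (assumes a numpy array input, which supports the same == and .sum() operations as an SArray):
import numpy as np
labels = np.array([+1, +1, +1, -1, -1])
# The majority class is +1, so the majority classifier makes 2 mistakes
print(intermediate_node_num_mistakes(labels))  # 2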
Step12: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Step13: Incorporating new early stopping conditions in binary decision tree implementation
Step14: Here is a function to count the nodes in your tree
Step15: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step16: Build a tree!
Step17: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
Step18: Making predictions
Step19: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
Step20: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class
Step21: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
Step22: Quiz question
Step23: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
Step24: Now, evaluate the validation error using my_decision_tree_old.
Step25: Quiz question
Step26: Evaluating the models
Step27: Now evaluate the classification error on the validation data.
Step28: Quiz Question
Step29: Compute the number of nodes in model_1, model_2, and model_3.
Step30: Quiz question
Step31: Calculate the accuracy of each model (model_4, model_5, or model_6) on the validation set.
Step32: Using the count_leaves function, compute the number of leaves in each of each models in (model_4, model_5, and model_6).
Step33: Quiz Question
Step34: Now, let us evaluate the models (model_7, model_8, or model_9) on the validation_set.
Step35: Using the count_leaves function, compute the number of leaves in each of each models (model_7, model_8, and model_9).
|
1,393
|
<ASSISTANT_TASK:>
Python Code:
# %load simulate_deployment_scenario.py
from __future__ import print_function
from typing import List, Tuple, Sequence
from collections import namedtuple
import random
import simpy
from serversim import *
def simulate_deployment_scenario(num_users, weight1, weight2, server_range1,
server_range2):
# type: (int, float, float, Sequence[int], Sequence[int]) -> Result
Result = namedtuple("Result", ["num_users", "weight1", "weight2", "server_range1",
"server_range2", "servers", "grp"])
def cug(mid, delta):
        """Computation units generator."""
def f():
return random.uniform(mid - delta, mid + delta)
return f
def ld_bal(svc_name):
        """Application server load-balancer."""
if svc_name == "svc_1":
svr = random.choice(servers1)
elif svc_name == "svc_2":
svr = random.choice(servers2)
else:
assert False, "Invalid service type."
return svr
simtime = 200
hw_threads = 10
sw_threads = 20
speed = 20
svc_1_comp_units = 2.0
svc_2_comp_units = 1.0
quantiles = (0.5, 0.95, 0.99)
env = simpy.Environment()
n_servers = max(server_range1[-1] + 1, server_range2[-1] + 1)
servers = [Server(env, hw_threads, sw_threads, speed, "AppServer_%s" % i)
for i in range(n_servers)]
servers1 = [servers[i] for i in server_range1]
servers2 = [servers[i] for i in server_range2]
svc_1 = CoreSvcRequester(env, "svc_1", cug(svc_1_comp_units,
svc_1_comp_units*.9), ld_bal)
svc_2 = CoreSvcRequester(env, "svc_2", cug(svc_2_comp_units,
svc_2_comp_units*.9), ld_bal)
weighted_txns = [(svc_1, weight1),
(svc_2, weight2)
]
min_think_time = 2.0 # .5 # 4
max_think_time = 10.0 # 1.5 # 20
svc_req_log = [] # type: List[Tuple[str, SvcRequest]]
grp = UserGroup(env, num_users, "UserTypeX", weighted_txns, min_think_time,
max_think_time, quantiles, svc_req_log)
grp.activate_users()
env.run(until=simtime)
return Result(num_users=num_users, weight1=weight1, weight2=weight2,
server_range1=server_range1, server_range2=server_range2,
servers=servers, grp=grp)
# %load print_results.py
from __future__ import print_function
from typing import Sequence, Any, IO
from serversim import *
def print_results(num_users=None, weight1=None, weight2=None, server_range1=None,
server_range2=None, servers=None, grp=None, fi=None):
# type: (int, float, float, Sequence[int], Sequence[int], Sequence[Server], UserGroup, IO[str]) -> None
if fi is None:
import sys
fi = sys.stdout
print("\n\n***** Start Simulation --", num_users, ",", weight1, ",", weight2, ", [", server_range1[0], ",", server_range1[-1] + 1,
") , [", server_range2[0], ",", server_range2[-1] + 1, ") *****", file=fi)
print("Simulation: num_users =", num_users, file=fi)
print("<< ServerExample >>\n", file=fi)
indent = " " * 4
print("\n" + "Servers:", file=fi)
for svr in servers:
print(indent*1 + "Server:", svr.name, file=fi)
print(indent * 2 + "max_concurrency =", svr.max_concurrency, file=fi)
print(indent * 2 + "num_threads =", svr.num_threads, file=fi)
print(indent*2 + "speed =", svr.speed, file=fi)
print(indent * 2 + "avg_process_time =", svr.avg_process_time, file=fi)
print(indent * 2 + "avg_hw_queue_time =", svr.avg_hw_queue_time, file=fi)
print(indent * 2 + "avg_thread_queue_time =", svr.avg_thread_queue_time, file=fi)
print(indent * 2 + "avg_service_time =", svr.avg_service_time, file=fi)
print(indent * 2 + "avg_hw_queue_length =", svr.avg_hw_queue_length, file=fi)
print(indent * 2 + "avg_thread_queue_length =", svr.avg_thread_queue_length, file=fi)
print(indent * 2 + "hw_queue_length =", svr.hw_queue_length, file=fi)
print(indent * 2 + "hw_in_process_count =", svr.hw_in_process_count, file=fi)
print(indent * 2 + "thread_queue_length =", svr.thread_queue_length, file=fi)
print(indent * 2 + "thread_in_use_count =", svr.thread_in_use_count, file=fi)
print(indent*2 + "utilization =", svr.utilization, file=fi)
print(indent*2 + "throughput =", svr.throughput, file=fi)
print(indent*1 + "Group:", grp.name, file=fi)
print(indent*2 + "num_users =", grp.num_users, file=fi)
print(indent*2 + "min_think_time =", grp.min_think_time, file=fi)
print(indent*2 + "max_think_time =", grp.max_think_time, file=fi)
print(indent * 2 + "responded_request_count =", grp.responded_request_count(None), file=fi)
print(indent * 2 + "unresponded_request_count =", grp.unresponded_request_count(None), file=fi)
print(indent * 2 + "avg_response_time =", grp.avg_response_time(), file=fi)
print(indent * 2 + "std_dev_response_time =", grp.std_dev_response_time(None), file=fi)
print(indent*2 + "throughput =", grp.throughput(None), file=fi)
for svc in grp.svcs:
print(indent*2 + svc.svc_name + ":", file=fi)
print(indent * 3 + "responded_request_count =", grp.responded_request_count(svc), file=fi)
print(indent * 3 + "unresponded_request_count =", grp.unresponded_request_count(svc), file=fi)
print(indent * 3 + "avg_response_time =", grp.avg_response_time(svc), file=fi)
print(indent * 3 + "std_dev_response_time =", grp.std_dev_response_time(svc), file=fi)
print(indent*3 + "throughput =", grp.throughput(svc), file=fi)
# %load report_resp_times.py
from typing import TYPE_CHECKING
from collections import OrderedDict, namedtuple
import matplotlib.pyplot as plt
import pandas as pd
if TYPE_CHECKING:
from serversim import UserGroup
Minibatch = namedtuple("Result", ["ts", "counts", "means", "q_95"])
def minibatch_resp_times(time_resolution, grp):
# type: (float, UserGroup) -> Minibatch
times = ((svc_req.time_dict["submitted"] // time_resolution) *
time_resolution
for (_, svc_req) in grp.svc_req_log
if svc_req.is_completed)
vals = (svc_req.time_dict["completed"] - svc_req.time_dict["submitted"]
for (_, svc_req) in grp.svc_req_log
if svc_req.is_completed)
series = pd.Series(vals, index=times)
grouped = series.groupby(level=0)
counts_ser = grouped.count()
ts = counts_ser.index.values
counts = counts_ser.values
means = grouped.mean().values
q_95 = grouped.quantile(.95).values
return Minibatch(ts, counts, means, q_95)
def plot_counts_means_q95(minibatch1, minibatch2):
# type: (Minibatch, Minibatch) -> None
x = minibatch1.ts # should be same as minibatch2.ts
counts1 = minibatch1.counts
counts2 = minibatch2.counts
means1 = minibatch1.means
means2 = minibatch2.means
q1_95 = minibatch1.q_95
q2_95 = minibatch2.q_95
fig = plt.figure(figsize=(10, 9)) # size in inches
grid = plt.GridSpec(2, 3, hspace=0.25)
# Plot counts
axc = fig.add_subplot(grid[0, :-1])
axc.plot(x, counts1, color='b', label="Counts 1")
axc.plot(x, counts2, color='r', label="Counts 2")
axc.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axc.set_xlabel("Time buckets")
axc.set_ylabel("Throughput")
# Plot response time averages and 95th percentiles
axr = fig.add_subplot(grid[1, :-1])
axr.plot(x, means1, color='b', label="Means 1")
axr.plot(x, q1_95, color='c', label="95th Percentile 1")
axr.plot(x, means2, color='r', label="Means 2")
axr.plot(x, q2_95, color='m', label="95th Percentile 2")
axr.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axr.set_xlabel("Time buckets")
axr.set_ylabel("Response times")
plt.show()
def compare_scenarios(sc1, sc2):
grp1 = sc1.grp
grp2 = sc2.grp
minibatch1 = minibatch_resp_times(5, grp1)
minibatch2 = minibatch_resp_times(5, grp2)
plot_counts_means_q95(minibatch1, minibatch2)
random.seed(123456)
sc1 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
print_results(**sc1.__dict__)
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=5, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=720, weight1=5, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 9), server_range2=range(0, 9))
random.setstate(rand_state)
sc2a = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 7), server_range2=range(7, 9))
random.setstate(rand_state)
sc2b = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 6), server_range2=range(6, 9))
compare_scenarios(sc1, sc2a)
compare_scenarios(sc1, sc2b)
users_curve = [(0, 900), (50, 540), (100, 900), (150, 540)]
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=users_curve, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=users_curve, weight1=2, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=users_curve, weight1=1, weight2=1,
server_range1=range(0, 9), server_range2=range(0, 9))
random.setstate(rand_state)
sc2a = simulate_deployment_scenario(num_users=users_curve, weight1=1, weight2=1,
server_range1=range(0, 7), server_range2=range(7, 9))
random.setstate(rand_state)
sc2b = simulate_deployment_scenario(num_users=users_curve, weight1=1, weight2=1,
server_range1=range(0, 6), server_range2=range(6, 9))
compare_scenarios(sc1, sc2a)
compare_scenarios(sc1, sc2b)
rand_state = random.getstate()
sc1a = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 12), server_range2=range(0, 12))
random.setstate(rand_state)
sc2a1 = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 9), server_range2=range(9, 12))
random.setstate(rand_state)
sc2a2 = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(10, 12))
random.setstate(rand_state)
sc1b = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 13), server_range2=range(0, 13))
random.setstate(rand_state)
sc2b = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(10, 13))
compare_scenarios(sc1a, sc2a1)
compare_scenarios(sc1a, sc2a2)
compare_scenarios(sc1b, sc2b)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: ServerSim Tutorial
Step3: Printing the simulation results
Step4: Mini-batching, plotting, and comparison of results
Step5: Random number generator seed
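The getstate/setstate pairing used below is the common-random-numbers technique: both scenarios replay identical randomness, so differences in the plots come from the deployment change alone. A minimal sketch of the idiom:
import random
rand_state = random.getstate()
a = [random.random() for _ in range(3)]
random.setstate(rand_state)
b = [random.random() for _ in range(3)]
assert a == b  # identical draws in both runs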
Step6: Simulations
Step7: Simulation 1
Step8: Repeating above comparison to illustrate variability of results.
Step9: Conclusions
Step10: Conclusions
Step11: Conclusions
Step12: Compare the results of scenarios 1 and 2a
Step13: Compare the results of scenarios 1 and 2b
Step14: Conclusions
Step15: Simulation 5
Step16: Conclusions
Step17: Compare the results of scenarios 1 and 2a
Step18: Compare the results of scenarios 1 and 2b
Step19: Conclusions
Step20: Compare the results of scenarios 1a and 2a1
Step21: Compare the results of scenarios 1a and 2a2
Step22: Compare the results of scenarios 1b and 2b
|
1,394
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score
digits = load_digits()
X, y = digits.data / 16., digits.target
cross_val_score(LogisticRegression(), X, y, cv=5)
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
grid = GridSearchCV(LogisticRegression(), param_grid={'C': np.logspace(-3, 2, 6)})
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest
pipe = make_pipeline(SelectKBest(k=59), LogisticRegression())
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
cross_val_score(LogisticRegression(C=.01), X, y == 3, cv=5)
cross_val_score(LogisticRegression(C=.01), X, y == 3, cv=5, scoring="roc_auc")
from sklearn.preprocessing import OneHotEncoder
X = np.array([[15.9, 1], # from Tokyo
[21.5, 2], # from New York
[31.3, 0], # from Paris
[25.1, 2], # from New York
[63.6, 1], # from Tokyo
[14.4, 1], # from Tokyo
])
y = np.array([0, 1, 1, 1, 0, 0])
encoder = OneHotEncoder(categorical_features=[1], sparse=False)
pipe = make_pipeline(encoder, LogisticRegression())
pipe.fit(X, y)
pipe.score(X, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model complexity, overfitting, underfitting
Step2: Scoring metrics
Step3: Data Wrangling
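Note that the categorical_features argument of OneHotEncoder was removed in later scikit-learn releases; a hedged sketch of the modern equivalent (assumes scikit-learn >= 1.2 for the sparse_output argument):
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
# One-hot encode column 1 (the city code) and pass the other column through
encoder = ColumnTransformer(
    [("city", OneHotEncoder(sparse_output=False), [1])],
    remainder="passthrough")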
|
1,395
|
<ASSISTANT_TASK:>
Python Code:
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "."]).decode("utf8"))
# Any results you write to the current directory are saved as output.
myTrainDf = pd.read_csv('./train.csv')
myTestDf = pd.read_csv('./test.csv')
# Display some digits to understand how the data is structured
def formatDigit (aLine):
return np.array(aLine).reshape((28,28))
NUM_EXAMPLES = 3
for i in (myTrainDf.shape[0]*np.random.rand(NUM_EXAMPLES)).astype(int):
myImage = formatDigit(myTrainDf.iloc[i][1:])
plt.matshow(myImage, cmap='gray')
plt.colorbar()
plt.title('Example for digit ' + str(myTrainDf.iloc[i][0]))
plt.show()
from sklearn.neural_network import MLPClassifier
myX = myTrainDf[myTrainDf.columns[1:]]
myY = myTrainDf[myTrainDf.columns[0]]
# Use 'adam' solver for large datasets, alpha is the regularization term.
# Display the optimization by showing the cost.
myClf = MLPClassifier(hidden_layer_sizes=25, activation='logistic', solver='adam',
alpha=1e-5, verbose=True)
myClf.fit(myX, myY)
# Get the training error
myPredY = myClf.predict(myX)
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
# Generic function to assess performance in two datasets
def showPerformance (aY, aYpred):
# Ensure np.array
print ('*** Performance Statistics ***')
print ('Accuracy: ', accuracy_score(aY, aYpred))
print ('Precision: ', precision_score(aY, aYpred, average='micro'))
print ('Recall: ', recall_score(aY, aYpred, average='micro'))
print ('F1: ', f1_score(aY, aYpred, average='micro'))
showPerformance(myY, myPredY)
myYtestPred = myClf.predict(myTestDf)
myOutDf = pd.DataFrame(index=myTestDf.index+1, data=myYtestPred)
myOutDf.reset_index().to_csv('submission.csv', header=['ImageId', 'Label'],index=False)
REG_ARRAY = [100, 1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 0]
def splitDataset (aDf, aFrac):
aTrainDf = aDf.sample(frac=aFrac)
aXvalDf = aDf.iloc[[x for x in aDf.index if x not in aTrainDf.index]]
return aTrainDf, aXvalDf
mySampleTrainDf, mySampleXvalDf = splitDataset(myTrainDf, 0.8)
myAccuracyDf = pd.DataFrame(index=REG_ARRAY, columns=['Accuracy'])
for myAlpha in REG_ARRAY:
print ('Training with regularization param ', str(myAlpha))
myClf = MLPClassifier(hidden_layer_sizes=25, activation='logistic', solver='adam', alpha=myAlpha, verbose=False)
myClf.fit(mySampleTrainDf[mySampleTrainDf.columns[1:]], mySampleTrainDf['label'])
myYpred = myClf.predict(mySampleXvalDf[mySampleXvalDf.columns[1:]])
myAccuracyDf.loc[myAlpha, 'Accuracy'] = accuracy_score(mySampleXvalDf['label'], myYpred)
myAccuracyDf
REG_ARRAY = [100, 1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 0]
myAccuracyDf = pd.DataFrame(index=REG_ARRAY, columns=['Accuracy'])
for myAlpha in REG_ARRAY:
print ('Training with regularization param ', str(myAlpha))
myClf = MLPClassifier(hidden_layer_sizes=[400, 400, 100, 25], activation='logistic', solver='adam', alpha=myAlpha, verbose=False)
myClf.fit(mySampleTrainDf[mySampleTrainDf.columns[1:]], mySampleTrainDf['label'])
myYpred = myClf.predict(mySampleXvalDf[mySampleXvalDf.columns[1:]])
myAccuracyDf.loc[myAlpha, 'Accuracy'] = accuracy_score(mySampleXvalDf['label'], myYpred)
myAccuracyDf
myClf = MLPClassifier(hidden_layer_sizes=[400, 400, 100, 25], activation='logistic', solver='adam', alpha=0, verbose=True)
myClf.fit(myTrainDf[myTrainDf.columns[1:]], myTrainDf['label'])
myYtestPred = myClf.predict(myTestDf)
myOutDf = pd.DataFrame(index=myTestDf.index+1, data=myYtestPred)
myOutDf.reset_index().to_csv('submission2.csv', header=['ImageId', 'Label'],index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we see how the data is organized, let's use a MLP, with an architecture as the one taught in "Machine Learning" from Stanford in coursera.org to recognize the digits.
Step2: The results are quite good, as expected, now let's make a prediction for the test set.
Step3: Cross-validation set and regularization parameter
Step4: From here one can tell that the default regularization parameter (around 1e-5)
Step5: Let's produce a new output file with no regularization and a complex MLP of 784x400x400x100x25x10
|
1,396
|
<ASSISTANT_TASK:>
Python Code:
import logging
LOG_FILENAME = 'logging_example.log'
logging.basicConfig(
filename=LOG_FILENAME,
level=logging.DEBUG,
)
logging.debug('This message should go to the log file')
with open(LOG_FILENAME, 'rt') as f:
body = f.read()
print('FILE:')
print(body)
import glob
import logging
import logging.handlers
LOG_FILENAME = 'logging_rotatingfile_example.log'
# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)
# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(
LOG_FILENAME,
maxBytes=20,
backupCount=5,
)
my_logger.addHandler(handler)
# Log some messages
for i in range(20):
my_logger.debug('i = %d' % i)
# See what files are created
logfiles = glob.glob('%s*' % LOG_FILENAME)
for filename in logfiles:
print(filename)
import logging
logging.basicConfig(level=logging.WARNING)
logger1 = logging.getLogger('package1.module1')
logger2 = logging.getLogger('package2.module2')
logger1.warning('This message comes from one module')
logger2.warning('This comes from another module')
import logging
import warnings
logging.basicConfig(
level=logging.INFO,
)
warnings.warn('This warning is not sent to the logs')
logging.captureWarnings(True)
warnings.warn('This warning is sent to the logs')
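# Captured warnings are routed through the 'py.warnings' logger at WARNING level.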
import logging
file_handler = logging.FileHandler("file_1.log", mode='a', encoding=None, delay=False)
file_handler_2 = logging.FileHandler("file_2.log", mode='a', encoding=None, delay=False)
logger = logging.getLogger("mytestlogger")
logger.addHandler(file_handler)
logger.addHandler(file_handler_2)
logger.critical("test critical message")
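# Every handler attached to a logger receives each record, so the critical
# message above lands in both file_1.log and file_2.log.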
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Rotating Log Files
Step2: The most current file is always logging_rotatingfile_example.log, and each time it reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment the suffix (.1 becomes .2, etc.) and the .5 file is erased.
Step3: Integration with the warnings Module
Step4: Multiple Handlers
|
1,397
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import itertools
import re
# numpy imports
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def hexbyte(x):
return "{:02X}".format(x)
def binbyte(x):
return "{:08b}".format(x)
def tohex(by, sep=" "):
return sep.join(hexbyte(x) for x in by)
def tobin(by, sep=" "):
return sep.join(binbyte(x) for x in by)
def hexline(by):
if len(by) <= 24:
return tohex(by)
else:
return tohex(by[:20]) + "..." + tohex(by[-4:])
def takebyn(by, n=8):
for i in range(0, len(by), n):
yield by[i:i+n]
def itertaken(seq, n):
itr = iter(seq)
try:
while True:
group = []
for _ in range(n):
group.append(next(itr))
yield group
except StopIteration:
if group:
yield group
def printhex(by, n=8, group=1):
grouped = ("".join(hexbyte(x) for x in g) for g in takebyn(by, group))
for x in itertaken(grouped, n):
print(" ".join(x))
with open('data/syxout.txt') as hexdata:
lines = [bytearray.fromhex(line.strip()) for line in hexdata]
for line in lines:
print(hexline(line))
lb = lines[0]
ls = lines[-4]
tohex(lb[:7])
tohex(lb[7:11])
tohex(lb[11:14])
tohex(lb[-2:])
print([sum(line[7:-1]) % 128 for line in lines])
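# If these all print 0, the 7-bit sum of the size, payload and checksum bytes
# cancels out, consistent with a Yamaha-style two's-complement checksum.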
def unpackseven(by):
    # Ignore the top bit of each byte; it is always zero in 7-bit SysEx data.
    # The most significant byte comes first (big-endian, 7 bits per byte).
value = 0
for i, b in enumerate(reversed(by)):
value |= b << (i*7)
return value
def unpacksevenhex(hexstring):
return "{:#x}".format(unpackseven(bytearray.fromhex(hexstring)))
unpacksevenhex('10 00')
len(lb[14:-2])
unpacksevenhex('08 68')
unpackseven(ls[7:9]) == len(ls[14:-2])
len(ls[14:-2])
payloads = [line[14:-2] for line in lines[:-3]]
payload = bytearray().join(payloads)
len(payload)/8
pays = np.array(payload).reshape((-1, 8))  # 9613 groups of 8 bytes
plt.plot(pays[:,7], 'r.')
np.asarray(np.unique(pays[:,7], return_counts=True)).T
tobin(pays[pays[:,7]==65,:][2])
tobin(np.unique(pays[:,7]))
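# Each 8-byte group appears to hold seven data bytes plus an eighth byte
# carrying their stripped high bits: reconstitute() below ORs bit (6 - i) of
# that eighth byte back in as the high bit of data byte i.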
def reconstitute(by):
blob = by[:7]
for i in range(7):
bitmask = 0b01000000 >> i
blob[i] = blob[i] | ((by[7] & bitmask) << i+1)
return blob
def reconstitute_all(by):
return bytearray().join(reconstitute(g) for g in takebyn(by, 8))
payload8 = reconstitute_all(payload)
[tohex(m) for m in re.findall(rb'\xF0[\x00-\x7F]*[\x80-\xFF]', payload8)]
[tohex(m) for m in re.findall(rb'\x43\x76\x1A\x04.{2}', payload8)]
len(payload8)
printhex(payload8[:0x15a], 16)
printhex(payload8[0x15a:0x162])
track_bitfields = slice(0x15d, 0x162)
print(tobin(payload8[track_bitfields], '\n'))
printhex(payload8[0x162:0x1f3], 16)
printhex(payload8[0x1f3:0x22f], 12)
payload8[0x1f3:0x22f].decode('ascii')
beginning_blocks = slice(0x22f,0x24d)
printhex(payload8[beginning_blocks], 6)
next_blocks = slice(0x24d,0x2cf)
printhex(payload8[next_blocks], 16)
printhex(payload8[0x2cf:0x2d5])
payload8[0x2cf:0x2d5].decode('ascii')
printhex(payload8[0x106d5:])
payload8[0x106d5:].decode('ascii')
(0x106d5-0x2d5)/0x82
printhex(payload8[0x2d5:0x2d5+8])
payload8[0x2d5:0x2d5+8]
def chunk_offset(n):
if 0x01 <= n <= 0x82:
return 0xd5 + (0x200*n)
else:
raise ValueError("Invalid chunk", n)
def block_peek(n, length=4):
offset = chunk_offset(n)
return payload8[offset:offset+length]
def print_block_peek(n):
bits = block_peek(n)
print(u"Block 0x{:02x} at 0x{:05x}: {} - {}".format(n, chunk_offset(n), tohex(bits),
bits.decode('ascii', 'replace')))
track_blocks = [x for x in payload8[beginning_blocks] if x != 0xFF]
for n in track_blocks:
print_block_peek(n)
print_block_peek(0x5e)
printhex(payload8[:0x15d], 16)
printhex(payload8[0x162:0x1f3], 16)
def unpackeight(by):
# big-endian.
total = 0
for i, b in enumerate(reversed(by)):
total |= b << (i*8)
return total
def unpackwords(by, n=2):
# probably not the way to do it, but eh.
return [unpackeight(x) for x in takebyn(by, n)]
mystery2 = unpackwords(payload8[0x163:0x1f3], 4)
plt.bar(np.arange(len(mystery2)), mystery2)
print(tobin(payload8[0x15d:0x162], '\n')) #track_bitfields
print('---')
printhex(payload8[0x162:0x167])
print('---')
printhex(payload8[0x167:0x17b],1,4)
print('---')
printhex(payload8[0x17b:0x1f3],6,4)
printhex(payload8[beginning_blocks], 6)
printhex(payload8[:0x15d], 16)
printhex(payload8[:0x19], 16)
printhex(payload8[0x19:0x159], 16)
printhex(payload8[0x159:0x15d], 16)
mystery_region = payload8[0x019:0x159]
mystery_numbers = unpackwords(mystery_region)
plt.bar(np.arange(160), mystery_numbers, width=1, linewidth=0)
printhex(mystery_numbers[:-30], 10)
print()
printhex(payload8[next_blocks], 10)
printhex(payload8[0x0:0x19], 32)
printhex(payload8[0x159:0x15d])
!diff -y --width=100 --suppress-common-lines data/syxout.txt data/syxout1.txt
with open('data/syxout1.txt') as newdump:
newdumpline = bytearray.fromhex(newdump.readline().strip())
newpayload8 = reconstitute_all(newdumpline[14:-2])
printhex(payload8[:0x19], 32)
printhex(newpayload8[:0x19], 32)
print(binbyte(payload8[0]))
print(binbyte(newpayload8[0]))
payload8[0x19:0x15d] == newpayload8[0x19:0x15d]
print(tobin(newpayload8[0x15d:0x162], '\n')) #track_bitfields
print('---')
printhex(newpayload8[0x162:0x167])
print('---')
printhex(newpayload8[0x167:0x17b],1,4)
print('---')
printhex(newpayload8[0x17b:0x1f3],6,4)
printhex(newpayload8[beginning_blocks], 6)
print('---')
printhex(newpayload8[next_blocks], 16)
payload8[0x2cf:0x700] == newpayload8[0x2cf:]
for line in lines[-2:]:
print(hexline(line))
otsline = lines[-2]
otspayload = otsline[14:-2]
unpackseven(otsline[7:9])
unpackseven(otsline[9:11])
len(otspayload)
otspayload8 = reconstitute_all(otspayload)
len(otspayload8)
hexline(otspayload8)
print(tohex(otspayload8[:4]))
print(tohex(otspayload8[-6:]))
otsgroups = list(takebyn(otspayload8[4:-6], 44))
for group in otsgroups:
print(hexline(group))
def grab_4_2(filename):
with open(filename) as hexdata:
message = bytearray.fromhex(hexdata.readlines()[-2].strip())
return reconstitute_all(message[14:-2])[4+11*44:4+12*44]
# if it's stupid and it works, then... it's still stupid, but hey, at least it works.
old_42 = otsgroups[11]
new_42 = grab_4_2('data/syxout2.txt')
newer_42 = grab_4_2('data/syxout3.txt')
def columnise(some_lines, height):
for i in range(height):
print("".join(some_lines[j] for j in range(i, len(some_lines), height)))
print("|ofst| old | new | newer | "*3)
columnise(["| {0:02X} | {1:02X} {1:3d} | {2:02X} {2:3d} | {3:02X} {3:3d} | ".format(i, oldv, newv, newerv) for i, (oldv, newv, newerv) in enumerate(zip(old_42, new_42, newer_42))], 16)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Picking at the message structure
Step2: Header
Step3: 'SS' Size bytes.
Step4: three 'ZZ' running total bytes.
Step5: the 'KK' checksum and end-of-message flag F7.
Step6: This should, and I say should, be of a similar format, so...
Step7: It worked, except for the two end-of-section messages, which don't have the checksum byte.
Step8: 0x800, or 2048 bytes of encoded data seems to be the maximum.
Step9: Yep.
Step10: From staring at the 'payload' in a hex editor, I have recognised some patterns. I think that every 8th byte is some sort of 'parity' byte or something. Ignoring these bytes you can see the ascii for 'YAMAHA' and 'MTrk' and 'PresetStyle\x00'. So that's a start.
Step11: What Is That Eighth Byte?
Step12: The Solution
Step13: Note that whenever an F0 occurs it is always followed by F7
Step14: It sure looks like there are some system exclusive messages in here! (Stored in MIDI file format, of course.)
Step15: Figuring Out The File Structure
Step16: It looks like there are 67291 bytes of decoded data instead of the 66766 bytes in the PSR-225.
Step17: I have no idea what the first 0x15A bytes are. Maybe it's a bunch of 16-bit numbers?
Step18: The DGX-505 has 5 UserSongs compared to the PSR-225's three. Each user song has five normal tracks and one chord/accompaniment track (track A).
Step19: The five bytes from 0x15d to 0x161 appear to correspond to the five usersongs, with the low 5 bits of each representing the active tracks on each (from least to most significant bit, track 1 - 2 - 3 - 4 - 5 - A).
Step20: Following are some more bytes of unknown stuff, which could be 32-bit integers?
Step21: ... followed by five repeats of ascii 'PresetStyle' and 00. The PSR-225 only has three of these (starting from 0x00007), so this has probably got something to do with each song.
Step22: Immediately following that, we have what appears to be the 30 bytes from 0x22f through 0x24c indicating the beginning blocks, corresponding to 0x0002B-0x0003F on the PSR-225.
Step23: Running from 0x24d to 0x2ce is the table of 0x82 (=130) next blocks, corresponding to 0x00040-0x000C1 on the PSR-225.
Step24: At 0x2cf, we find 'PK0001', which corresponds to the PSR-225's 'PK9801' at 0x000C2, and marks the beginning of the block data at 0x2d5.
Step25: The final six bytes, spelling 'PK0001' again from 0x106d5 to 0x106da, mark the end of the block data and correspond to 0x104C8-0x104CD on the PSR-225.
Step26: Like the PSR-225, the blocks are 0x200 = 512 bytes in size.
Step27: And we can see the start of an MTrk chunk.
Step28: We can see that the blocks marked as the beginning blocks for each track do, indeed, contain the start of the tracks.
Step29: We can also see that some blocks have data left over from previous recordings.
Step30: We've accounted for pretty much everything found in the PSR-225's song dump format, except for the very first byte, offset 0x00000, which is supposed to be a bitfield for the songs in use. If the same pattern is used for the DGX-505, then we should expect to see a 00011111 or 1F byte somewhere, as all five songs were in use.
Step31: There's definitely a pattern
Step32: It looks like after the track-active bitfields come five zero bytes, then five 32-bit numbers that seem to correspond to the five songs, then 30 32-bit numbers that correspond to the tracks of those songs, with values of 00000000 for inactive tracks and other numbers (lengths?) for the active tracks. Each song's number seems to be the largest (longest?) of its tracks' values. Looking through them, the values seem to be roughly the durations of the tracks, in measures.
Step33: The Final Mystery Region
Step34: Let's describe some patterns.
Step35: From 0x19 to 0x11c, we get 320 bytes of some increasing and decreasing patterns
Step36: And then four more bytes of whatever. I'm baffled, really.
Step37: Staring at this, we can see four regions that run from indexes 0-49, 50-99, 100-129 and 130-159. Note that there are 160 numbers here, and 130 blocks in the file system, and 30 tracks. I'm guessing that this has something to do with the blocks just by the numbers here, but I don't know what. Probably some internal file system thing.
Step38: And that still doesn't explain 0x0-0x18, 0x159-0x15c. More experimentation required.
Step39: More Experimentation
Step40: The beginning mystery bytes are the same, except for the first byte, which does indeed seem to be the song usage bitfield, with the least significant bit = user song 1 etc.
Step41: The mystery region is the same for both files.
Step42: Song 4 is not in use anymore, neither should its tracks be. Their durations are all zero as well.
Step43: Blocks 07, 08, 09 and 0B have been marked empty for reuse.
Step44: The rest of the data is the same.
Step45: Assuming the messages are of roughly the same format, we have
Step46: The PSR-225 had 1576 encoded bytes (788 bytes decoded) of OTS data. The DGX-505 has 816 encoded bytes (714 bytes decoded), but the two length values differ by two, 06 30 (=816) and 06 2E (=814).
Step47: The data begins with 50 53 52 03 and ends with 50 53 52 03 00 00. Perhaps those two extra bytes are the reason for the differing sizes. (As an aside, 50 53 52 is ASCII for 'PSR', but the PSR-225 doesn't have them.)
Step48: Like the PSR-225, the settings are stored by button and then by bank.
|
1,398
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model, preprocessing

# Load the ISLR Default (credit card) data. Drop the index column
# and any rows with missing data.
df = pd.read_csv(
    'https://vincentarelbundock.github.io/Rdatasets/csv/ISLR/Default.csv'
).iloc[:,1:].dropna()
# Recode strings to numeric.
df['default'] = np.where(df['default']=='Yes', 1, 0)
df['student'] = np.where(df['student']=='Yes', 1, 0)
names = df.columns
df = pd.DataFrame(preprocessing.scale(df), columns=names)
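# preprocessing.scale standardizes each column to zero mean and unit variance,
# so the regularization penalty treats every feature on a comparable scale.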
# Define the training and test sizes.
trainsize = int(df.shape[0] / 2)
df_test = df.iloc[trainsize:, :].copy()
df_train = df.iloc[:trainsize, :].copy()
Y_train = df_train['income'].values.reshape(-1, 1)
X_train = df_train.loc[:, ~(df_train.columns).isin(['income'])]
# Make some new features to capture potential interactions and
# quadratic and cubic relationships among balance, student, and default.
df_train['balance_student'] = df_train['balance'] * df_train['student']
df_train['balance_default'] = df_train['balance'] * df_train['default']
df_train['student_default'] = df_train['student'] * df_train['default']
df_train['balance_sqrt'] = (df_train['balance'] + 100) ** .5
df_train['balance2'] = (df_train['balance'] + 100) ** 2
df_train['balance3'] = (df_train['balance'] + 100) ** 3
X_train2 = df_train.loc[:, ~(df_train.columns).isin(['income'])]
# Build test-set targets and features for the simpler model.
Y_test = df_test['income'].values.reshape(-1, 1)
X_test = df_test.loc[:, ~(df_test.columns).isin(['income'])]
# Build the engineered test-set features for the more complex model.
df_test['balance_student'] = df_test['balance'] * df_test['student']
df_test['balance_default'] = df_test['balance'] * df_test['default']
df_test['student_default'] = df_test['student'] * df_test['default']
df_test['balance_sqrt'] = (df_test['balance'] + 100) ** .5
df_test['balance2'] = (df_test['balance'] + 100) ** 2
df_test['balance3'] = (df_test['balance'] + 100) ** 3
X_test2 = df_test.loc[:, ~(df_test.columns).isin(['income'])]
# Small number of parameters.
lass = linear_model.Lasso(alpha=.35)
lassfit = lass.fit(X_train, Y_train)
print('R² for the model with few features:')
print(lass.score(X_train, Y_train))
origparams = np.append(lassfit.coef_, lassfit.intercept_)
print('\nParameter estimates for the model with few features:')
print(origparams)
# Large number of parameters.
lassBig = linear_model.Lasso(alpha=.35)
lassBig.fit(X_train2, Y_train)
print('\nR² for the model with many features:')
print(lassBig.score(X_train2, Y_train))
origparams = np.append(lassBig.coef_, lassBig.intercept_)
print('\nParameter estimates for the model with many features:')
print(origparams)
print(lass.score(X_test, Y_test))
print(lassBig.score(X_test2, Y_test))
# Suppress annoying harmless error.
import warnings
warnings.filterwarnings(action="ignore")
# Your code here
# WHY DO WE CALL LAMBDA, ALPHA?! THIS IS WORTH YELLING ABOUT. :-)
from sklearn.model_selection import cross_val_score
from sklearn import linear_model
interval = 0.1
alpha_range = np.arange(0,10,interval)
Y = Y_train
X = X_train2
# search for an optimal correlation value for Linear Models
lasso_scores = []
ridge_scores = []
for alphaVal in alpha_range:
lassBig = linear_model.Lasso(alpha=alphaVal)
lasso_score = cross_val_score(lassBig, X, Y, cv=10)
lasso_scores.append(lasso_score.mean())
ridgeBig = linear_model.Ridge(alpha=alphaVal, fit_intercept=False)
ridge_score = cross_val_score(ridgeBig, X, Y, cv=10)
ridge_scores.append(ridge_score.mean())
# plot the performance of Lasso and Ridge Regression against the value of alpha
plt.plot(alpha_range, lasso_scores, label = "Lasso Regression")
plt.plot(alpha_range, ridge_scores, label = "Ridge Regression")
plt.xlabel("Value of Alpha")
plt.ylabel("Cross-Validated Regression Score")
plt.legend()
plt.title("Optimal Alpha Values for Lasso and Ridge Regression")
plt.show()
ridgeregrBig = linear_model.Ridge(alpha=10, fit_intercept=False)
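# Note: this estimator is instantiated but never fitted above; a sketch of
# completing the comparison would be:
# ridgeregrBig.fit(X_train2, Y_train)
# print(ridgeregrBig.score(X_test2, Y_test))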
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Checking predictive power using the test set
Step2: Regularization parameter
Step3: Summary
|
1,399
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
data = np.array([[0.1, 0.2, 0.3], [0.8, 0.9, 1.0], [1.5, 1.6, 1.7],])
layer = layers.Normalization()
layer.adapt(data)
normalized_data = layer(data)
print("Features mean: %.2f" % (normalized_data.numpy().mean()))
print("Features std: %.2f" % (normalized_data.numpy().std()))
data = [
"ξεῖν᾽, ἦ τοι μὲν ὄνειροι ἀμήχανοι ἀκριτόμυθοι",
"γίγνοντ᾽, οὐδέ τι πάντα τελείεται ἀνθρώποισι.",
"δοιαὶ γάρ τε πύλαι ἀμενηνῶν εἰσὶν ὀνείρων:",
"αἱ μὲν γὰρ κεράεσσι τετεύχαται, αἱ δ᾽ ἐλέφαντι:",
"τῶν οἳ μέν κ᾽ ἔλθωσι διὰ πριστοῦ ἐλέφαντος,",
"οἵ ῥ᾽ ἐλεφαίρονται, ἔπε᾽ ἀκράαντα φέροντες:",
"οἱ δὲ διὰ ξεστῶν κεράων ἔλθωσι θύραζε,",
"οἵ ῥ᾽ ἔτυμα κραίνουσι, βροτῶν ὅτε κέν τις ἴδηται.",
]
layer = layers.TextVectorization()
layer.adapt(data)
vectorized_text = layer(data)
print(vectorized_text)
vocab = ["a", "b", "c", "d"]
data = tf.constant([["a", "c", "d"], ["d", "z", "b"]])
layer = layers.StringLookup(vocabulary=vocab)
vectorized_data = layer(data)
print(vectorized_data)
from tensorflow import keras
from tensorflow.keras import layers
# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.1),
]
)
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
input_shape = x_train.shape[1:]
classes = 10
# Create a tf.data pipeline of augmented images (and their labels)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.batch(16).map(lambda x, y: (data_augmentation(x), y))
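# Note: augmenting inside tf.data.map() runs the random ops in the input
# pipeline on the CPU, so it can overlap with accelerator training
# (chain .prefetch(tf.data.AUTOTUNE) to maximise that overlap).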
# Create a model and train it on the augmented image data
inputs = keras.Input(shape=input_shape)
x = layers.Rescaling(1.0 / 255)(inputs) # Rescale inputs
outputs = keras.applications.ResNet50( # Add the rest of the model
weights=None, input_shape=input_shape, classes=classes
)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
model.fit(train_dataset, steps_per_epoch=5)
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
x_train = x_train.reshape((len(x_train), -1))
input_shape = x_train.shape[1:]
classes = 10
# Create a Normalization layer and set its internal state using the training data
normalizer = layers.Normalization()
normalizer.adapt(x_train)
# Create a model that includes the normalization layer
inputs = keras.Input(shape=input_shape)
x = normalizer(inputs)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)
# Train the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train)
# Define some toy data
data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]])
# Use StringLookup to build an index of the feature values and encode output.
lookup = layers.StringLookup(output_mode="one_hot")
lookup.adapt(data)
# Convert new test data (which includes unknown feature values)
test_data = tf.constant([["a"], ["b"], ["c"], ["d"], ["e"], [""]])
encoded_data = lookup(test_data)
print(encoded_data)
# Define some toy data
data = tf.constant([[10], [20], [20], [10], [30], [0]])
# Use IntegerLookup to build an index of the feature values and encode output.
lookup = layers.IntegerLookup(output_mode="one_hot")
lookup.adapt(data)
# Convert new test data (which includes unknown feature values)
test_data = tf.constant([[10], [10], [20], [50], [60], [0]])
encoded_data = lookup(test_data)
print(encoded_data)
# Sample data: 10,000 random integers with values between 0 and 100,000
data = np.random.randint(0, 100000, size=(10000, 1))
# Use the Hashing layer to hash the values into the range [0, 64)
hasher = layers.Hashing(num_bins=64, salt=1337)
# Use the CategoryEncoding layer to multi-hot encode the hashed values
encoder = layers.CategoryEncoding(num_tokens=64, output_mode="multi_hot")
encoded_data = encoder(hasher(data))
print(encoded_data.shape)
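# Note: hashing is lossy; distinct inputs can collide in the same bin, trading
# a little accuracy for a fixed-size encoding with no vocabulary to maintain.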
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Create a TextVectorization layer
text_vectorizer = layers.TextVectorization(output_mode="int")
# Index the vocabulary via `adapt()`
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=text_vectorizer.vocabulary_size(), output_dim=16)(inputs)
x = layers.GRU(8)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Instantiate TextVectorization with "multi_hot" output_mode
# and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="multi_hot", ngrams=2)
# Index the bigrams via `adapt()`
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Instantiate TextVectorization with "tf-idf" output_mode
# (multi-hot with TF-IDF weighting) and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="tf-idf", ngrams=2)
# Index the bigrams and learn the TF-IDF weights via `adapt()`
with tf.device("CPU"):
# A bug that prevents this from running on GPU for now.
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using preprocessing layers
Step2: The adapt() method takes either a NumPy array or a tf.data.Dataset object. For StringLookup and TextVectorization, you can also pass a list of strings.
Step3: In addition, adaptable layers always expose an option to set state directly via constructor arguments or weight assignment. If the intended state values are known at layer-construction time, or are computed outside the adapt() call, they can be set without relying on the layer's internal computation. For example, if external vocabulary files for the TextVectorization, StringLookup, or IntegerLookup layers already exist, they can be loaded directly into the lookup tables by passing the path to the vocabulary file in the layer's constructor arguments (see the sketch after this step list).
Step4: Preprocessing data before the model or inside the model
Step5: You can see a similar setup in action in the Image classification from scratch example.
Step6: Encoding string categorical features via one-hot encoding
Step7: Note that, here, index 0 is reserved for out-of-vocabulary values (values that were not seen during adapt()).
Step8: Note that index 0 is reserved for missing values (which should be specified as the value 0), and index 1 is reserved for out-of-vocabulary values (values not seen during adapt()). This can be configured via the mask_token and oov_token constructor arguments of IntegerLookup.
Step9: Encoding text as a sequence of token indices
Step10: The <a>Text classification from scratch</a> example shows the <code>TextVectorization</code> layer in action, combined with an <code>Embedding</code> layer.
Step11: Encoding text as a dense matrix of ngrams with TF-IDF weighting
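# A minimal sketch for Step3 above (hypothetical file "vocab.txt", one token
# per line): the vocabulary is supplied via the constructor, so no adapt()
# call is needed before using the layer.
from tensorflow.keras import layers
lookup = layers.StringLookup(vocabulary="vocab.txt")
print(lookup(["a", "b", "unknown"]))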
|