<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-2', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
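# Illustrative sketch only (hypothetical names and emails, not actual document
# metadata): the three TODO cells above would be completed with calls of the form
#     DOC.set_author("Jane Doe", "jane.doe@example.org")
#     DOC.set_contributor("John Smith", "john.smith@example.org")
#     DOC.set_publication_status(1)   # 1 = publish once the document is ready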
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
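# Illustrative sketch only (hypothetical selections, not actual model metadata):
# a multi-valued property such as this one would typically be filled by calling
# DOC.set_value once per selected choice, e.g.
#     DOC.set_value("troposhere")
#     DOC.set_value("stratosphere")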
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
Step2: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
Step3: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
Step4: 4. Using the breast cancer data, create a classifier to predict the tumor class (malignant or benign). Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import tree
iris = datasets.load_iris()
x = iris.data[:,2:] #attributes
y = iris.target #target variable
dt = tree.DecisionTreeClassifier()
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)
dt = dt.fit(x_train,y_train)
from sklearn import metrics
import numpy as np
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confusion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
    if show_confusion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt)
#Accuracy: Out of all the predicted outcomes (true positive, false positive, true negative, false negative), how many are true positives or negatives?
# a score of .973 means that 97.3 percent of those classified were true positives and true negatives.
# While 2.7 percent were false positives (said to be true when it was false) or
# false negatives (said to be false when it was true )
#Precision: When the condition was predicted to be true (true positives, false positives),
#how many of those 'true' predictions were true positives
#a score of 1 means that all of the cases predicted to be true were true positives
#a score of .93 means 93 percent of those classified as 1 were in class 1 (tp)
#while 7 percent of those classified as 1 were not in class 1 (fp, we said it was true when it was false)
#Recall: When the actual condition was true,how many were predicted true positives?
#a score of 1 means that all the cases that were actually true were predicted to be true
#a score of .9 means that when the actual condition was true, 90% were predicted to be true, and 10% were predicted
#to be false (false negatives, we said it was false when it was true)
#confusion matrix: rows are the actual condition: class 0, class 1, class 2
#columns are the predicted conditions, predicted_class 0, predicted_class 1, predicted class 2
#so the matrix tells us that when the class was actually 2, we predicted it was class 1 twice
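# Worked example (illustrative numbers only, not taken from the model above):
# recovering precision and recall by hand from a 2x2 confusion matrix, to make
# the definitions in the comments above concrete. Rows = actual, columns = predicted.
example_cm = np.array([[8, 2],
                       [1, 9]])
tp = example_cm[1, 1]   # actual 1, predicted 1
fp = example_cm[0, 1]   # actual 0, predicted 1 (false alarm)
fn = example_cm[1, 0]   # actual 1, predicted 0 (miss)
print("precision for class 1:", tp / (tp + fp))   # 9/11 ~ 0.82
print("recall for class 1:", tp / (tp + fn))      # 9/10 = 0.90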
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
dt = dt.fit(x_train,y_train)
measure_performance(x_test,y_test,dt)
# they're worse, in a way.
# but i suspect that the first model was overfitting the data.
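# A hedged sketch (not part of the original exercise): k-fold cross-validation
# averages accuracy over several splits, so it is less sensitive to one lucky or
# unlucky hold-out and helps check the overfitting suspicion above.
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(tree.DecisionTreeClassifier(), x, y, cv=5)
print("5-fold accuracy: %0.3f (+/- %0.3f)" % (cv_scores.mean(), cv_scores.std()))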
breast_cancer = datasets.load_breast_cancer()
breast_cancer['feature_names']
breast_cancer['target']
x = breast_cancer.data[:,:]
y = breast_cancer.target
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)
dt = dt.fit(x_train,y_train)
measure_performance(x_test,y_test,dt)
# when benign (0), 16 cases were said to be malignant when they were not
# when malignant (1), 7 cases were said to be benign when they were not
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
dt = dt.fit(x_train,y_train)
measure_performance(x_test,y_test,dt)
# the model is better.
# when the actual case was benign, it predicted that it was malignant 4 times
# when the actual case was malignant, it predicted that it was benign once.
# perhaps this is because the data set is larger, and there is a relationship between the size of data set and
# the split between training and testing
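# Quick check (illustrative): the breast cancer set has roughly four times as
# many samples as iris, which supports the idea that a larger data set makes a
# single 75-25 hold-out estimate more stable.
print("breast cancer:", breast_cancer.data.shape, "iris:", iris.data.shape)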
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python versions
Step2: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Step3: Strings
Step4: String objects have a bunch of useful methods; for example
Step5: You can find a list of all string methods in the documentation
Step6: As usual, you can find all the gory details about lists in the documentation
Step7: Loops
Step8: If you want access to the index of each element within the body of a loop, use the built-in enumerate function
Step9: List comprehensions
Step10: You can make this code simpler using a list comprehension
Step11: List comprehensions can also contain conditions
Step12: Dictionaries
Step13: You can find all you need to know about dictionaries in the documentation.
Step14: If you want access to keys and their corresponding values, use the iteritems method
Step15: Dictionary comprehensions
Step16: Sets
Step17: Loops
Step18: Set comprehensions
Step19: Tuples
Step20: Functions
Step21: We will often define functions to take optional keyword arguments, like this
Step22: Classes
Step23: Numpy
Step24: Arrays
Step25: Numpy also provides many functions to create arrays
Step26: Array indexing
Step27: A slice of an array is a view into the same data, so modifying it will modify the original array.
Step28: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing
Step29: Two ways of accessing the data in the middle row of the array. Mixing integer indexing with slices yields an array of lower rank, while using only slices yields an array of the same rank as the original array
Step30: Integer array indexing
Step31: Boolean array indexing
Step32: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Step33: You can read all about numpy datatypes in the documentation.
Step34: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects
Step35: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum
Step36: You can find the full list of mathematical functions provided by numpy in the documentation.
Step37: Broadcasting
Step38: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this
Step39: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Step40: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
Step41: By running this special iPython command, we will be displaying plots inline
Step42: Plotting
Step43: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels
Step44: Subplots
|
<ASSISTANT_TASK:>
Python Code:
def quicksort(arr, depth=0, pos="middle", verbose=False):
if len(arr) <= 1:
if verbose:
print("pos:", pos)
print("depth:", depth)
print("###")
return arr
pivot = arr[int(len(arr) / 2)]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
if verbose:
print("pivot:", pivot)
print("left:", left)
print("middle:", middle)
print("right:", right)
print("pos:", pos)
print("depth:", depth)
print("###")
return quicksort(left, depth+1, "left") + middle + quicksort(right, depth+1, "right")
print(quicksort([3,6,8,10,1,2,1]))
x = 3
print (x, type(x))
print ("Addition:", x + 1) # Addition;
print ("Subtraction:", x - 1) # Subtraction;
print ("Multiplication:", x * 2) # Multiplication;
print ("Exponentiation:", x ** 2) # Exponentiation;
x += 1
print ("Incrementing:", x) # Prints "4"
x *= 2
print ("Exponentiating:", x) # Prints "8"
y = 2.5
print ("Type of y:", type(y)) # Prints "<type 'float'>"
print ("Many values:", y, y + 1, y * 2, y ** 2) # Prints "2.5 3.5 5.0 6.25"
t, f = True, False
print (type(t)) # Prints "<type 'bool'>"
print ("True AND False:", t and f) # Logical AND;
print ("True OR False:", t or f) # Logical OR;
print ("NOT True:", not t) # Logical NOT;
print ("True XOR False:", t != f) # Logical XOR;
hello = 'hello' # String literals can use single quotes
world = "world" # or double quotes; it does not matter.
print (hello, len(hello))
hw = hello + ' ' + world # String concatenation
print (hw) # prints "hello world"
hw12 = '%s %s %d' % (hello, world, 12) # sprintf style string formatting
print (hw12) # prints "hello world 12"
s = "hello"
print ("Capitalized String:", s.capitalize()) # Capitalize a string; prints "Hello"
print ("Uppercase String:", s.upper()) # Convert a string to uppercase; prints "HELLO"
print ("Right justified String with padding of '7':", s.rjust(7)) # Right-justify a string, padding with spaces; prints " hello"
print ("Centered String with padding of '7':", s.center(7)) # Center a string, padding with spaces; prints " hello "
print ("Replace 'l' with '(ell)':", s.replace('l', '(ell)')) # Replace all instances of one substring with another;
# prints "he(ell)(ell)o"
print ("Stripped String:", ' world '.strip()) # Strip leading and trailing whitespace; prints "world"
xs = [3, 1, 2] # Create a list
print (xs, xs[2])
print (xs[-1]) # Negative indices count from the end of the list; prints "2"
xs[2] = 'foo' # Lists can contain elements of different types
print (xs)
xs.append('bar') # Add a new element to the end of the list
print (xs)
x = xs.pop() # Remove and return the last element of the list
print (x, xs)
nums = list(range(5)) # range is a built-in function that creates a sequence of integers; list() turns it into a list
print (nums) # Prints "[0, 1, 2, 3, 4]"
print (nums[2:4]) # Get a slice from index 2 to 4 (exclusive); prints "[2, 3]"
print (nums[2:]) # Get a slice from index 2 to the end; prints "[2, 3, 4]"
print (nums[:2]) # Get a slice from the start to index 2 (exclusive); prints "[0, 1]"
print (nums[:]) # Get a slice of the whole list; prints "[0, 1, 2, 3, 4]"
print (nums[:-1]) # Slice indices can be negative; prints "[0, 1, 2, 3]"
nums[2:4] = [8, 9] # Assign a new sublist to a slice
print (nums) # Prints "[0, 1, 8, 9, 4]"
animals = ['cat', 'dog', 'monkey']
for animal in animals:
print (animal)
animals = ['cat', 'dog', 'monkey']
for idx, animal in enumerate(animals):
print ('#%d: %s' % (idx + 1, animal))
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
print (squares)
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
print (squares)
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
print (even_squares)
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print ("Value of the dictionary for the key 'cat':", d['cat']) # Get an entry from a dictionary; prints "cute"
print ("Is 'cat' is the dictionary d:", 'cat' in d) # Check if a dictionary has a given key; prints "True"
d['fish'] = 'wet' # Set an entry in a dictionary
print ("Value of the dictionary for the key 'fish':", d['fish']) # Prints "wet"
print (d['monkey']) # KeyError: 'monkey' not a key of d
print ("Get 'monkey' value or default:", d.get('monkey', 'N/A')) # Get an element with a default; prints "N/A"
print ("Get 'fish' value or default:", d.get('fish', 'N/A')) # Get an element with a default; prints "wet"
del d['fish'] # Remove an element from a dictionary
print ("Get 'fish' value or default:", d.get('fish', 'N/A')) # "fish" is no longer a key; prints "N/A"
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal in d:
legs = d[animal]
print ('A %s has %d legs' % (animal, legs))
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.items():
print ('A %s has %d legs' % (animal, legs))
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
print (even_num_to_square)
animals = {'cat', 'dog'}
print ("Is 'cat' in the set:", 'cat' in animals) # Check if an element is in a set; prints "True"
print ("Is 'fish' in the set:", 'fish' in animals) # prints "False"
animals.add('fish') # Add an element to a set
print ("Is 'fish' in the set:", 'fish' in animals)
print ("What is the length of the set:", len(animals)) # Number of elements in a set;
animals.add('cat') # Adding an element that is already in the set does nothing
print ("What is the length of the set:", len(animals))
animals.remove('cat') # Remove an element from a set
print ("What is the length of the set:", len(animals))
animals = {'cat', 'dog', 'fish'}
for idx, animal in enumerate(animals):
print ('#%d: %s' % (idx + 1, animal))
from math import sqrt
set_comprehension = {int(sqrt(x)) for x in range(30)}
print (set_comprehension)
print (type(set_comprehension))
d = {(x, x + 1): x for x in range(10)} # Create a dictionary with tuple keys
t = (5, 6) # Create a tuple
print (type(t))
print (d[t])
print (d[(1, 2)])
print ("Access the 1st value of Tuple:", t[0])
print ("Access the 2nd value of Tuple:", t[1])
t[0] = 1 # This does NOT work !
t = (1, t[1]) # This DOES work !
print (t)
def sign(x):
if x > 0:
return 'positive'
elif x < 0:
return 'negative'
else:
return 'zero'
for x in [-1, 0, 1]:
print (sign(x))
def hello(name, loud=False):
if loud:
print ('HELLO, %s' % name.upper())
else:
print ('Hello, %s!' % name)
hello('Bob')
hello('Fred', loud=True)
class Greeter:
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print ('HELLO, %s!' % self.name.upper())
else:
print ('Hello, %s' % self.name)
g = Greeter('Fred') # Construct an instance of the Greeter class
g.greet() # Call an instance method; prints "Hello, Fred"
g.greet(loud=True) # Call an instance method; prints "HELLO, FRED!"
import numpy as np
import warnings
warnings.filterwarnings('ignore') # To remove warnings about "deprecated" or "future" features
a = np.array([1, 2, 3]) # Create a rank 1 array
print (type(a), a.shape, a[0], a[1], a[2])
a[0] = 5 # Change an element of the array
print (a)
b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array
print (b)
print (b.shape)
print (b[0, 0], b[0, 1], b[1, 0])
a = np.zeros((2,2)) # Create an array of all zeros
print (a)
b = np.ones((1,2)) # Create an array of all ones
print (b)
c = np.full((2,2), 7) # Create a constant array
print (c)
d = np.eye(2) # Create a 2x2 identity matrix
print (d)
e = np.random.random((2,2)) # Create an array filled with random values
print (e)
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print (b)
print ("Original Matrix before modification:", a[0, 1])
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print ("Original Matrix after modification:", a[0, 1])
# Create the following rank 2 array with shape (3, 4)
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print (a)
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print ("Rank 1 access of the 2nd row:", row_r1, row_r1.shape)
print ("Rank 2 access of the 2nd row:", row_r2, row_r2.shape)
print ("Rank 2 access of the 2nd row:", row_r3, row_r3.shape)
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print ("Rank 1 access of the 2nd column:", col_r1, col_r1.shape)
print ()
print ("Rank 2 access of the 2nd column:\n", col_r2, col_r2.shape)
a = np.array([[1,2], [3, 4], [5, 6]])
# An example of integer array indexing.
# The returned array will have shape (3,); prints "[1 4 5]"
print (a[[0, 1, 2], [0, 1, 0]])
# The above example of integer array indexing is equivalent to this:
print (np.array([a[0, 0], a[1, 1], a[2, 0]]))
# When using integer array indexing, you can reuse the same
# element from the source array:
print (a[[0, 0], [1, 1]])
# Equivalent to the previous integer array indexing example
print (np.array([a[0, 1], a[0, 1]]))
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print (a)
# Create an array of indices
b = np.array([0, 2, 0, 1])
b_range = np.arange(4)
print ("b_range:", b_range)
# Select one element from each row of a using the indices in b
print ("Selected Matrix Values:", a[b_range, b]) # Prints "[ 1 6 7 11]"
# Mutate one element from each row of a using the indices in b
a[b_range, b] += 10 # Only the selected values are modified in the "a" matrix.
print ("Modified 'a' Matrix:\n", a)
a = np.array([[1,2], [3, 4], [5, 6]])
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
print (bool_idx)
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
print (a[bool_idx])
# We can do all of the above in a single concise statement:
print (a[a > 2])
x = np.array([1, 2]) # Let numpy choose the datatype
y = np.array([1.0, 2.0]) # Let numpy choose the datatype
z = np.array([1, 2], dtype=np.int64) # Force a particular datatype
print (x.dtype, y.dtype, z.dtype)
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum; both produce the array
print (x + y)
print ()
print (np.add(x, y))
# Elementwise difference; both produce the array
print (x - y)
print ()
print (np.subtract(x, y))
# Elementwise product; both produce the array
print (x * y)
print ()
print (np.multiply(x, y))
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print (x / y)
print ()
print (np.divide(x, y))
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print (np.sqrt(x))
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print ("v.w 'dot' product:", v.dot(w))
print ("numpy 'dot' product (v,w):", np.dot(v, w))
# Matrix / vector product; both produce the rank 1 array [29 67]
print ("x.v 'dot' product:", x.dot(v))
print ("numpy 'dot' product (x,v):", np.dot(x, v))
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print ("x.y 'dot' product:\n", x.dot(y))
print ("numpy 'dot' product (x,y):\n", np.dot(x, y))
x = np.array([[1,2],[3,4]])
print ("Sum of all element:", np.sum(x)) # Compute sum of all elements; prints "10"
print ("Sum of each column:", np.sum(x, axis=0)) # Compute sum of each column; prints "[4 6]"
print ("Sum of each row:", np.sum(x, axis=1)) # Compute sum of each row; prints "[3 7]"
print ("Matrix x:\n", x)
print ()
print ("Matrix x transposed:\n", x.T)
v = np.array([[1,2,3]])
print ("Matrix v:\n", v)
print ()
print ("Matrix v transposed:\n", v.T)
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = np.empty_like(x) # Create an empty matrix with the same shape as x
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
y[i, :] = x[i, :] + v
print (y)
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
print (vv) # Prints "[[1 0 1]
# [1 0 1]
# [1 0 1]
# [1 0 1]]"
y = x + vv # Add x and vv elementwise
print (y)
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v # Add v to each row of x using broadcasting
print (y)
# Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,)
w = np.array([4,5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
print (np.reshape(v, (3, 1)) * w)
# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
print (x + v)
# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
print ((x.T + w).T)
# Another solution is to reshape w to be a row vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print (x + np.reshape(w, (2, 1)))
# Multiply a matrix by a constant:
# x has shape (2, 3). Numpy treats scalars as arrays of shape ();
# these can be broadcast together to shape (2, 3), producing the
# following array:
print (x * 2)
import matplotlib.pyplot as plt
%matplotlib inline
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# Plot the points using matplotlib
plt.plot(x, y)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-hr', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
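# Illustrative sketch only (hypothetical value, and the units are an assumption
# here): a scalar property like this one takes a plain number rather than a
# string, e.g.
#     DOC.set_value(-1.8)   # constant ocean freezing point, assumed in degC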
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper Functions
Step2: Load the model classes
Step3: A class that takes a set of Python dictionaries containing Wi-Fi logging data loaded from JSON files collected by the YanuX Scavenger Android application
Step4: Initialize Input & Output Data Directories and other parameters
Step5: Create the output directory if it doesn't exist
Step6: Load Data from the Input Data Directory
Step7: Wi-Fi Readings
Step8: Store the data into a Pandas Dataframe, in which each Wi-Fi result reading is represented by a single line
Step9: Identify the unique MAC Addresses present in the recorded data. Each one represents a single Wi-Fi Access Point.
Step10: Similarly, store the data into a Pandas Dataframe in which each line represents a single sampling cycle with n different readings for each of the Access Points within range. Those readings are stored as columns along each sample.
Step11: Data Set Statistics
Step12: Number of Unique Mac Addresses
Step13: How often has each Access Point been detected
Step14: How many Wi-Fi results were gathered at each location
Step15: How many APs were detected at each location
Step16: The coordinates of the points where data was captured
Step17: Signal Strength Distribution
Step18: Set a train and test scenario to be used by default when testing.
Step19: Playground
Step20: # Neighbors & Distance Weights
Step21: Metric
Step22: Feature Scaling
Step23: NaN filler values
Step24: Impact of orientation in the results
Step25: Impact of the spacing between reference points in the results
Step26: Impact of the amount of available data in the results
Step27: Save all the data that was collected into an Excel file
Step28: Grid Search - Automatically searching for the best estimator parameters
|
<ASSISTANT_TASK:>
Python Code:
# Python Standard Library
import getopt
import os
import sys
import math
import time
import collections
import random
# IPython
from IPython.display import display
# pandas
import pandas as pd
pd.set_option("display.max_rows", 10000)
pd.set_option("display.max_columns", 10000)
# Matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
from matplotlib.ticker import MultipleLocator
# seaborn
import seaborn as sns
sns.set_style("whitegrid")
sns.despine()
# NumPy
import numpy as np
# SciPy
import scipy as sp
from scipy.stats import gaussian_kde
# StatsModels
import statsmodels.api as sm
# scikit-learn
import sklearn
from sklearn import metrics
from sklearn import preprocessing
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
def experiment_plots(results, save_to=None, figsize=(8, 8)):
fig, axarr = plt.subplots(2, 1, figsize=figsize)
for key, result in results.items():
max_error = math.ceil(result["error"].max())
kde = gaussian_kde(result["error"].values)
X_plot=np.linspace(0, max_error, 1000)
axarr[0].plot(X_plot, kde.evaluate(X_plot), "-", label=key)
axarr[0].set_xlabel("Error (e) in meters (m)")
axarr[0].set_ylabel(r"$F_X(e)$")
axarr[0].xaxis.set_major_locator(MultipleLocator(0.5))
axarr[0].set_xlim(0, result["error"].quantile(q=0.9975))
axarr[0].legend()
for key, result in results.items():
ecdf = sm.distributions.ECDF(result["error"])
x = np.linspace(min(result["error"]), max(result["error"]))
y = ecdf(x)
axarr[1].plot(x, y, label=key)
axarr[1].set_xlabel("Error (e) in meters (m)")
axarr[1].set_ylabel(r"$f_X(e)$")
axarr[1].xaxis.set_major_locator(MultipleLocator(0.5))
axarr[1].yaxis.set_major_locator(MultipleLocator(0.1))
axarr[1].set_xlim(0, result["error"].quantile(q=0.9975))
axarr[1].set_ylim(0)
axarr[1].legend()
fig.tight_layout()
if save_to is not None:
fig.savefig(output_data_directory+"/"+save_to, dpi=300)
plt.show()
def experiment_statistics(result):
statistics = collections.OrderedDict([
("mae", result["error"].abs().mean()),
("rmse", np.sqrt((result["error"]**2).mean())),
("sd", result["error"].std()),
("p50", result["error"].quantile(q=0.50)),
("p75", result["error"].quantile(q=0.75)),
("p90", result["error"].quantile(q=0.90)),
("p95", result["error"].quantile(q=0.95)),
("min", result["error"].min()),
("max", result["error"].max()),
])
return statistics
def knn_experiment(data, test_data, train_cols, coord_cols,
scaler=None, n_neighbors=5, weights="uniform",
algorithm="auto", leaf_size=30, p=2, metric="minkowski",
metric_params=None, n_jobs=1):
result = None
knn = KNeighborsRegressor(n_neighbors=n_neighbors, weights=weights, algorithm=algorithm,
leaf_size=leaf_size, p=p, metric=metric,
metric_params=metric_params, n_jobs=n_jobs)
if scaler is not None:
estimator = make_pipeline(scaler, knn)
else:
estimator = knn
locations = data.groupby(coord_cols).indices.keys()
for coords in locations:
train_data = data[(data[coord_cols[0]] != coords[0]) |
(data[coord_cols[1]] != coords[1])].reset_index(drop=True)
target_values = test_data[(test_data[coord_cols[0]] == coords[0]) &
(test_data[coord_cols[1]] == coords[1])].reset_index(drop=True)
estimator.fit(train_data[train_cols], train_data[coord_cols])
predictions = pd.DataFrame(estimator.predict(target_values[train_cols]), columns=coord_cols)
curr_result = target_values[coord_cols].join(predictions, rsuffix="_predicted")
error = pd.DataFrame((predictions[coord_cols] - curr_result[coord_cols]).apply(np.linalg.norm, axis=1),
columns=["error"])
curr_result = pd.concat([curr_result, error], axis=1)
result = pd.concat([result, curr_result])
return result
def knn_experiment_cv(data, cross_validation, train_cols, coord_cols,
scaler=None, n_neighbors=5, weights='uniform',
algorithm="auto", leaf_size=30, p=2, metric="minkowski",
metric_params=None, n_jobs=1):
result = None
knn = KNeighborsRegressor(n_neighbors=n_neighbors, weights=weights, algorithm=algorithm,
leaf_size=leaf_size, p=p, metric=metric,
metric_params=metric_params, n_jobs=n_jobs)
if scaler is not None:
estimator = make_pipeline(scaler, knn)
else:
estimator = knn
X = data[train_cols]
y = data[coord_cols]
predictions = pd.DataFrame(cross_val_predict(estimator, X, y, cv=cross_validation), columns=coord_cols)
result = y.join(predictions, rsuffix="_predicted")
error = pd.DataFrame((predictions[coord_cols] - result[coord_cols]).apply(np.linalg.norm, axis=1), columns=["error"])
result = pd.concat([result, error], axis=1)
return result
from yanux.cruncher.model.loader import JsonLoader
from yanux.cruncher.model.wifi import WifiLogs
input_data_directory = "data"
output_data_directory = "out"
statistics_excel_writer = pd.ExcelWriter(output_data_directory+"/statistics.xlsx")
if not os.path.exists(output_data_directory):
os.makedirs(output_data_directory)
json_loader = JsonLoader(input_data_directory+"/wifi-fingerprints")
wifi_logs = WifiLogs(json_loader.json_data)
num_samples_per_location = int(len(wifi_logs.wifi_samples()) / len(wifi_logs.locations))
num_samples_per_location
wifi_results_columns = ["filename", "place", "floor", "x", "y", "orientation", "sample_id", "mac_address",
"timestamp", "signal_strength"]
wifi_results = pd.DataFrame(wifi_logs.wifi_results(), columns=wifi_results_columns)
wifi_results.to_csv(output_data_directory + "/wifi_results.csv")
mac_addresses = wifi_results.mac_address.unique()
wifi_samples_columns = ["filename", "place", "floor", "x", "y", "orientation", "sample_id", "timestamp"]
wifi_samples_columns.extend(mac_addresses)
wifi_samples = pd.DataFrame(wifi_logs.wifi_samples(), columns=wifi_samples_columns)
wifi_samples = wifi_samples.sort_values(["filename", "x", "y", "floor", "sample_id"]).reset_index(drop=True)
wifi_samples.to_csv(output_data_directory + "/wifi_samples.csv")
len(wifi_results)
len(wifi_results.mac_address.unique())
wifi_results_mac_address_group = wifi_results.groupby("mac_address")
wifi_results_mac_address_group.size().plot(kind="bar")
wifi_results_mac_address_group.size()
wifi_results_mac_address_group.size().mean()
wifi_results_coord_group = wifi_results.groupby(["x", "y"])
wifi_results_coord_group.size().plot(kind="bar")
wifi_results_coord_group.size()
wifi_results_coord_group.size().describe()
wifi_ap_per_location = wifi_samples.groupby(["x","y"]).min()[wifi_results_mac_address_group.size().keys()].count(axis=1)
wifi_ap_per_location.plot(kind="bar")
wifi_ap_per_location
wifi_ap_per_location.describe()
coords = wifi_results[["x","y"]].drop_duplicates().sort_values(by=["x","y"]).reset_index(drop=True)
coords_plot_size = (min(coords["x"].min(),coords["y"].min()), max(coords["x"].max(),coords["y"].max()))
#TODO: If I end up using it in the document, then I should refactor the plot to use matplotlib directly to tweak a few things.
coords.plot(figsize=(16,5), x="x",y="y", style="o", grid=True, legend=False,
xlim=coords_plot_size, ylim=coords_plot_size,
xticks=np.arange(coords_plot_size[0]-1, coords_plot_size[1]+1, 1),
yticks=np.arange(coords_plot_size[0]-1, coords_plot_size[1]+1, 1)).axis('equal')
wifi_results.hist(column="signal_strength")
train_cols = mac_addresses
coord_cols = ["x","y"]
default_data_scenario = wifi_samples.copy()
default_data_scenario_groups = default_data_scenario["x"].map(str)+","+default_data_scenario["y"].map(str)
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
curr_data = default_data_scenario.fillna(nan_filler)
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
curr_statistics = experiment_statistics(curr_result)
curr_result.to_csv(output_data_directory+"/results-base.csv")
statistics_table = pd.DataFrame([curr_statistics], columns=list(curr_statistics.keys()))
statistics_table.to_csv(output_data_directory+"/statistics-base.csv")
statistics_table.to_excel(statistics_excel_writer, "base")
#show table
display(statistics_table)
#plots
experiment_plots({'Base Example':curr_result})
n_neighbors=np.arange(1,31,1)
weights=["uniform", "distance"]
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
curr_data = default_data_scenario.fillna(nan_filler)
# Just a statistics accumulator
statistics = []
for k in n_neighbors:
for w in weights:
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=k,
weights=w,
metric=metric)
curr_statistics = experiment_statistics(curr_result)
curr_statistics["k"] = k
curr_statistics["weights"] = w
statistics.append(curr_statistics)
cols = ["k","weights"] + list(curr_statistics.keys())[:-2]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-neighbors-weights.csv")
statistics_table.to_excel(statistics_excel_writer, "neighbors-weights")
#show table
display(statistics_table.sort_values(cols[3:]))
# Plotting Error statistics
fig, ax = plt.subplots(figsize=(8, 5))
index = n_neighbors
ax.plot(index, statistics_table[statistics_table["weights"] == "uniform"]["mae"].tolist(),
color="b", ls="-", label="Uniform (MAE)")
ax.plot(index, statistics_table[statistics_table["weights"] == "distance"]["mae"].tolist(),
color="r", ls="-", label="Distance (MAE)")
ax.plot(index, statistics_table[statistics_table["weights"] == "uniform"]["rmse"].tolist(),
color="b", ls="--", label="Uniform (RMSE)")
ax.plot(index, statistics_table[statistics_table["weights"] == "distance"]["rmse"].tolist(),
color="r", ls="--", label="Distance (RMSE)")
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_major_locator(MultipleLocator(0.05))
ax.set_xlabel("Number of Neighbours (k)")
ax.set_ylabel("Error (e) in meters (m)")
plt.legend()
plt.tight_layout()
plt.savefig(output_data_directory+"/plot-neighbors_weights.pdf", dpi=300)
plt.show()
n_neighbors=15
weights="distance"
distance_statistics=["euclidean", "manhattan", "canberra", "braycurtis"]
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
curr_data = default_data_scenario.fillna(nan_filler)
# Results and statistics accumulators
results = {}
statistics = []
for metric in distance_statistics:
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
results[metric] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["metric"] = metric
statistics.append(curr_statistics)
cols = ["metric"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-metric.csv")
statistics_table.to_excel(statistics_excel_writer, "metric")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-metric.pdf")
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler= default_data_scenario[mac_addresses].min().min()*1.001
cross_validation = LeaveOneGroupOut()
scalers = {"No Scaling": None,
"Rescaling": preprocessing.MinMaxScaler(),
"Standardization": preprocessing.StandardScaler()}
# Results and statistics accumulators
results = {}
statistics = []
for scaler_name, scaler in scalers.items():
curr_data = default_data_scenario.fillna(nan_filler)
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
results[scaler_name] = curr_result
curr_statistics = experiment_statistics(results[scaler_name])
curr_statistics["scaler"] = scaler_name
statistics.append(curr_statistics)
cols = ["scaler"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-feature_scaling.csv")
statistics_table.to_excel(statistics_excel_writer, "feature_scaling")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-feature_scaling.pdf")
n_neighbors=15
weights="distance"
metric="braycurtis"
min_rssi_value = default_data_scenario[mac_addresses].min().min()
nan_fillers = [min_rssi_value,min_rssi_value*1.001,min_rssi_value*1.010,min_rssi_value*1.100,min_rssi_value*1.500]
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
# Results and statistics accumulators
results = {}
statistics = []
for nf in nan_fillers:
curr_data = default_data_scenario.fillna(nf)
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
results[nf] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["nan_filler"] = nf
statistics.append(curr_statistics)
cols = ["nan_filler"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-nan_filler.csv")
statistics_table.to_excel(statistics_excel_writer, "nan_filler")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-nan_filler.pdf")
filename_prefixes = ["left-to-right-point", "right-to-left-point"]
filename_prefix_data_scenarios = {}
#filename_prefix_data_scenarios["all"] = default_data_scenario
for filename_prefix in filename_prefixes:
filename_prefix_data_scenarios[filename_prefix] = default_data_scenario[wifi_samples["filename"].str.startswith(filename_prefix)].reset_index(drop=True)
filename_prefix_test_data_scenarios = {}
filename_prefix_test_data_scenarios["all"] = default_data_scenario
for filename_prefix in filename_prefixes:
filename_prefix_test_data_scenarios[filename_prefix] = default_data_scenario[wifi_samples["filename"].str.startswith(filename_prefix)].reset_index(drop=True)
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
# Results and statistics accumulators
results = {}
statistics = []
for train_data_keys, train_data in filename_prefix_data_scenarios.items():
for test_data_keys, test_data in filename_prefix_test_data_scenarios.items():
curr_data = train_data.fillna(nan_filler)
curr_test_data = test_data.fillna(nan_filler)
curr_result = knn_experiment(curr_data,
curr_test_data,
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
label = "Train: "+train_data_keys+" Test: "+test_data_keys
results[label] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["orientation"] = label
statistics.append(curr_statistics)
cols = ["orientation"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-orientation.csv")
statistics_table.to_excel(statistics_excel_writer, "orientation")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-orientation.pdf")
subset_reference_points_scenarios = {}
coords_indices = default_data_scenario.groupby(coord_cols).indices
odd_coords_keys = list(coords_indices.keys())[0::2]
odd_ids = []
for key in odd_coords_keys:
odd_ids.extend(coords_indices[key])
even_coords_keys = list(coords_indices.keys())[1::2]
even_ids = []
for key in even_coords_keys:
even_ids.extend(coords_indices[key])
subset_reference_points_scenarios["odd"] = default_data_scenario.loc[odd_ids].reset_index(drop=True)
subset_reference_points_scenarios["even"] = default_data_scenario.loc[even_ids].reset_index(drop=True)
subset_reference_points_scenarios["all"] = default_data_scenario
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
# Results and statistics accumulators
results = {}
statistics = []
for train_data_keys, train_data in subset_reference_points_scenarios.items():
curr_data = train_data.fillna(nan_filler)
curr_test_data = default_data_scenario.fillna(nan_filler)
curr_result = knn_experiment(curr_data,
curr_test_data,
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
results[train_data_keys] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["reference_points_spacing"] = train_data_keys
statistics.append(curr_statistics)
cols = ["reference_points_spacing"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-reference_points_spacing.csv")
statistics_table.to_excel(statistics_excel_writer, "reference_points_spacing")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-reference_points_spacing.pdf")
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
partial_data = [0.9, 0.7, 0.5, 0.3, 0.1]
repetitions = 5
train_data = default_data_scenario[mac_addresses].copy()
target_values = default_data_scenario[coord_cols].copy()
target_values["label"] = default_data_scenario["x"].map(str) + "," + default_data_scenario["y"].map(str)+ "," + default_data_scenario["filename"].map(str)
# Results and statistics accumulators
results = {}
statistics = []
for partial in partial_data:
curr_result = pd.DataFrame()
for repetition in range(repetitions):
X_train, X_test, y_train, y_test = train_test_split(train_data,
target_values,
test_size=1-partial,
stratify=target_values["label"].values)
#train data
train_split_data = pd.concat([y_train, X_train], axis=1).reset_index(drop=True)
#test data
#test_split_data = pd.concat([y_test, X_test], axis=1).reset_index(drop=True)
test_split_data = default_data_scenario
curr_data = train_split_data.fillna(nan_filler)
curr_test_data = test_split_data.fillna(nan_filler)
curr_result = curr_result.append(knn_experiment(curr_data,
curr_test_data,
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric), ignore_index=True)
results[partial] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["partial_data"] = partial
statistics.append(curr_statistics)
cols = ["partial_data"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-partial_data.csv")
statistics_table.to_excel(statistics_excel_writer, "partial_data")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-partial_data.pdf")
statistics_excel_writer.save()
k_neighbors_values = range(1,31,1)
weights_values = [
"uniform",
"distance"
]
metric_values = [
"euclidean",
"manhattan",
"canberra",
"braycurtis"
]
algorithm_values = ["brute"]
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
curr_data = default_data_scenario.fillna(nan_filler)
param_grid = {
"kneighborsregressor__n_neighbors": list(k_neighbors_values),
"kneighborsregressor__weights": weights_values,
"kneighborsregressor__metric": metric_values,
"kneighborsregressor__algorithm": algorithm_values,
}
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
estimator = make_pipeline(preprocessing.StandardScaler(), KNeighborsRegressor())
grid = GridSearchCV(estimator,
param_grid=param_grid,
cv=cross_validation,
n_jobs=-1,
scoring=sklearn.metrics.make_scorer(sklearn.metrics.mean_squared_error,
greater_is_better=False,
multioutput="uniform_average"))
grid.fit(curr_data[mac_addresses], curr_data[coord_cols], groups=default_data_scenario_groups)
print("Best parameters set found on development set:")
print(grid.best_params_)
print("Grid scores on development set:")
gridcv_results = pd.DataFrame(grid.cv_results_)
gridcv_results[['mean_test_score', 'std_test_score', 'params']]
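# A possible follow-up (sketch only, not part of the original analysis):
# GridSearchCV refits the best parameter combination on the full data by
# default (refit=True), so the fitted best estimator can be reused directly.
best_estimator = grid.best_estimator_
print(best_estimator.predict(curr_data[mac_addresses].head()))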
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: # Creating and manipulating tensors
Step2: ## Vector addition
Step3: ### Tensor shapes
Step4: ### Broadcasting
Step5: ## Matrix multiplication
Step6: ## Reshaping tensors
Step7: You can also use tf.reshape to change the number of dimensions (the "rank") of the tensor.
Step8: ### Exercise 1
Step9: ### Solution
Step10: ## Variables, initialization and assignment
Step11: One peculiarity of TensorFlow is that variable initialization is not automatic. For example, the following block will raise an error
Step12: The easiest way to initialize a variable is to call global_variables_initializer. Note the use of Session.run(), which is roughly equivalent to eval().
Step13: Once initialized, variables keep their value within the same session (however, when starting a new session you will need to initialize them again)
Step14: To change the value of a variable, use the assign op. Note that merely creating the assign op has no effect. As with initialization, you must run the assignment op to actually update the variable's value
Step15: There are many more topics about variables that we did not cover here, such as saving and loading. To learn more, see the TensorFlow documentation.
Step16: ### Solution
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import tensorflow as tf
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create another six-element vector. Each element in the vector will be
# initialized to 1. The first argument is the shape of the tensor (more
# on shapes below).
ones = tf.ones([6], dtype=tf.int32)
# Add the two vectors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
# Create a session to run the default graph.
with tf.Session() as sess:
print(just_beyond_primes.eval())
with tf.Graph().as_default():
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
with tf.Session() as sess:
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.eval())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.eval())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.eval())
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create a constant scalar with value 1.
ones = tf.constant(1, dtype=tf.int32)
# Add the two tensors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
with tf.Session() as sess:
print(just_beyond_primes.eval())
with tf.Graph().as_default():
# Create a matrix (2-d tensor) with 3 rows and 4 columns.
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# Create a matrix with 4 rows and 2 columns.
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`.
# The resulting matrix will have 3 rows and 2 columns.
matrix_multiply_result = tf.matmul(x, y)
with tf.Session() as sess:
print(matrix_multiply_result.eval())
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 2x8 matrix.
reshaped_2x8_matrix = tf.reshape(matrix, [2,8])
# Reshape the 8x2 matrix into a 4x4 matrix
reshaped_4x4_matrix = tf.reshape(matrix, [4,4])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.eval())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.eval())
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 3-D 2x2x4 tensor.
reshaped_2x2x4_tensor = tf.reshape(matrix, [2,2,4])
# Reshape the 8x2 matrix into a 1-D 16-element tensor.
one_dimensional_vector = tf.reshape(matrix, [16])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.eval())
print("1-D vector:")
print(one_dimensional_vector.eval())
# Write your code for Task 1 here.
with tf.Graph().as_default(), tf.Session() as sess:
# Task: Reshape two tensors in order to multiply them
# Here are the original operands, which are incompatible
# for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
# We need to reshape at least one of these operands so that
# the number of columns in the first operand equals the number
# of rows in the second operand.
# Reshape vector "a" into a 2-D 2x3 matrix:
reshaped_a = tf.reshape(a, [2,3])
# Reshape vector "b" into a 2-D 3x1 matrix:
reshaped_b = tf.reshape(b, [3,1])
# The number of columns in the first matrix now equals
# the number of rows in the second matrix. Therefore, you
# can matrix mutiply the two operands.
c = tf.matmul(reshaped_a, reshaped_b)
print(c.eval())
# An alternate approach: [6,1] x [1, 3] -> [6,3]
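# A sketch of that alternate approach (illustrative only, hypothetical names):
# reshape "a" into a 6x1 column matrix and "b" into a 1x3 row matrix, so that
# the matrix product has shape [6, 3].
#   a_6x1 = tf.reshape(a, [6, 1])
#   b_1x3 = tf.reshape(b, [1, 3])
#   print(tf.matmul(a_6x1, b_1x3).eval())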
g = tf.Graph()
with g.as_default():
# Create a variable with the initial value 3.
v = tf.Variable([3])
# Create a variable of shape [1], with a random initial value,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.Variable(tf.random_normal([1], mean=1.0, stddev=0.35))
with g.as_default():
with tf.Session() as sess:
try:
v.eval()
except tf.errors.FailedPreconditionError as e:
print("Caught expected error: ", e)
with g.as_default():
with tf.Session() as sess:
initialization = tf.global_variables_initializer()
sess.run(initialization)
# Now, variables can be accessed normally, and have values assigned to them.
print(v.eval())
print(w.eval())
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# These three prints will print the same value.
print(w.eval())
print(w.eval())
print(w.eval())
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# This should print the variable's initial value.
print(v.eval())
assignment = tf.assign(v, [7])
# The variable has not been changed yet!
print(v.eval())
# Execute the assignment op.
sess.run(assignment)
# Now the variable is updated.
print(v.eval())
# Write your code for Task 2 here.
with tf.Graph().as_default(), tf.Session() as sess:
# Task 2: Simulate 10 throws of two dice. Store the results
# in a 10x3 matrix.
# We're going to place dice throws inside two separate
# 10x1 matrices. We could have placed dice throws inside
# a single 10x2 matrix, but adding different columns of
# the same matrix is tricky. We also could have placed
# dice throws inside two 1-D tensors (vectors); doing so
# would require transposing the result.
dice1 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
dice2 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
# We may add dice1 and dice2 since they share the same shape
# and size.
dice_sum = tf.add(dice1, dice2)
# We've got three separate 10x1 matrices. To produce a single
# 10x3 matrix, we'll concatenate them along dimension 1.
resulting_matrix = tf.concat(
values=[dice1, dice2, dice_sum], axis=1)
# The variables haven't been initialized within the graph yet,
# so let's remedy that.
sess.run(tf.global_variables_initializer())
print(resulting_matrix.eval())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (image)
Step2: (image)
Step3: (image)
|
<ASSISTANT_TASK:>
Python Code:
import calendar
import datetime
import numpy
import os.path
import pickle
from random import randrange, random, shuffle
import sys
import time
import math
import nupic
from nupic.encoders import ScalarEncoder, MultiEncoder
from nupic.bindings.algorithms import SpatialPooler as SP
from nupic.research.TP10X2 import TP10X2 as TP
C = [1, 1, 1, 0, 0, 0, 0, 0, 0]
B = [0, 0, 0, 1, 1, 1, 0, 0, 0]
A = [0, 0, 0, 0, 0, 0, 1, 1, 1]
n = 10
w = 3
#inputs = [[0] * (i*w) + [1] *w + [0] * ((n - i - 1) * w) for i in range (0, n)]
enc = ScalarEncoder(w=5, minval=0, maxval=10, radius=1.25, periodic=True, name="encoder", forced=True)
for d in range(0, 10):
print str(enc.encode(d))
inputs = [enc.encode(i) for i in range(10)]
tp = TP(numberOfCols=40, cellsPerColumn=7.9,
initialPerm=0.5, connectedPerm=0.5,
minThreshold=10, newSynapseCount=10,
permanenceInc=0.1, permanenceDec=0.01,
activationThreshold=1,
globalDecay=0, burnIn=1,
checkSynapseConsistency=False,
pamLength=7)
input_array = numpy.zeros(40, dtype="int32")
tp.reset()
for i, pattern in enumerate(inputs*1):
input_array[:] = pattern
tp.compute(input_array, enableLearn=True, computeInfOutput=True)
tp.printStates()
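# A possible follow-up (sketch, assuming the standard TP interface): instead of
# reading the printed state dump, the columns predicted for the next time step
# can be inspected programmatically, e.g.
#   predicted = tp.getPredictedState()  # shape: (numberOfCols, cellsPerColumn)
#   print(predicted.max(axis=1).nonzero()[0])  # columns containing a predicted cell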
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Working with Data
Step2: Normalizing data
Step3: We can use standard deviation to normalize data.
Step4: We are now going to normalize the data so we give both data items the same weight.
Step5: with numpy array we can do simple vectorized operations
|
<ASSISTANT_TASK:>
Python Code:
# Series
import numpy as np
import pandas as pd
myArray = np.array([2,3,4])
row_names = ['p','q','r']
mySeries = pd.Series(myArray,index=row_names)
print (mySeries)
print (mySeries[0])
print (mySeries['p'])
# Dataframes
myArray = np.array([[2,3,4],[5,6,7]])
row_names = ['p','q']
col_names = ['One','Two','Three']
myDataFrame = pd.DataFrame(myArray,index = row_names,columns = col_names)
print (myDataFrame)
print ('Method 1 :')
print ('One column = \n{}'.format(myDataFrame['One']))
print ('Method 2 :')
print ('One column = \n{}'.format(myDataFrame.One))
# Let's load data from a csv
df = pd.read_csv("../data/diabetes.csv")
df.info()
# Examine data
df.head()
%matplotlib inline
import matplotlib.pyplot as plt
# Histogram
bins=range(0,100,10)
plt.hist(df["Age"].values, bins, alpha=0.5, label='age')
plt.show()
plt.hist(df["BMI"].values, bins, alpha=0.5, label='BMI')
plt.show()
plt.hist(df["Age"].values, bins, alpha=0.5, label='age')
plt.hist(df["BMI"].values, bins, alpha=0.5, label='BMI')
plt.show()
from numpy.random import normal
gaussian_numbers = normal(size=5000)
plt.hist(gaussian_numbers, bins=np.linspace(-5.0, 5.0, num=20)) # Set bin bounds
plt.show()
# Let's start with an example on the AGE feature
# I create a new array for easier manipulation
arr_age = df["Age"].values
arr_age[:10]
mean_age = np.mean(arr_age)
std_age = np.std(arr_age)
print ('Age Mean: {} Std:{}'.format(mean_age, std_age))
# So to compute the standardized array, I write :
arr_age_new = (arr_age - mean_age)/std_age
arr_age_new[:10]
# I can now apply the same idea to a pandas dataframe
# using some built in pandas functions :
df_new = (df - df.mean()) / df.std()
df_new.head()
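# Quick sanity check (illustrative): after standardization each column of
# df_new should have a mean close to 0 and a standard deviation close to 1.
print(df_new["Age"].mean(), df_new["Age"].std())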
df.head()
# Histogram
bins=np.linspace(-5.0, 5.0, num=20)
plt.hist(df_new["Age"].values, bins, alpha=0.5, label='age')
plt.hist(df_new["BMI"].values, bins, alpha=0.5, label='BMI')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The attributes in Client are name, balance and level.
Step2: We can see the attributes of John_Doe, or Jane_Defoe by calling them
Step3: We can also add, remove or modify attributes as we like
Step4: You can also use the following functions instead of the normal statements
Step5: Methods
Step6: What is "self"?
Step7: Static Methods
Step8: Class Methods
Step9: Key Concept
|
<ASSISTANT_TASK:>
Python Code:
# create the Client class below
class Client(object):
def __init__(self, name, balance):
self.name = name
self.balance = balance + 100
#define account level
if self.balance < 5000:
self.level = "Basic"
elif self.balance < 15000:
self.level = "Intermediate"
else:
self.level = "Advanced"
John_Doe = Client("John Doe", 500)
Jane_Defoe = Client("Jane Defoe", 150000)
John_Doe.name
Jane_Defoe.level
Jane_Defoe.balance
John_Doe.email = "jdoe23@gmail.com"
John_Doe.email = "johndoe23@gmail.com"
del John_Doe.email
getattr(John_Doe, 'name')
setattr(John_Doe, 'email', 'jdoe23@gmail.com')
John_Doe.email
Client.bank = "TD"
Client.location = "Toronto, ON"
# try calling these attributes at the class and instance level
Client.bank
Jane_Defoe.bank
# Use the Client class code above to now add methods for withdrawal and depositing of money
# create the Client class below
class Client(object):
def __init__(self, name, balance):
self.name = name
self.balance = balance + 100
#define account level
if self.balance < 5000:
self.level = "Basic"
elif self.balance < 15000:
self.level = "Intermediate"
else:
self.level = "Advanced"
def deposit(self, amount):
self.balance += amount
return self.balance
def withdraw(self, amount):
if amount > self.balance:
raise RuntimeError("Insufficient for withdrawal")
else:
self.balance -= amount
return self.balance
Jane_Defoe.deposit(150000)
# Try calling a method two different ways
John_Doe.deposit(500)
Client.withdraw(Jane_Defoe, 50000)
# Add a static method called make_money_sound()
# create the Client class below
class Client(object):
def __init__(self, name, balance):
self.name = name
self.balance = balance + 100
#define account level
if self.balance < 5000:
self.level = "Basic"
elif self.balance < 15000:
self.level = "Intermediate"
else:
self.level = "Advanced"
@staticmethod
def make_money_sound():
print "Cha-ching!"
Client.make_money_sound()
# Add a class method called bank_location()
# create the Client class below
class Client(object):
bank = "TD"
location = "Toronto, ON"
def __init__(self, name, balance):
self.name = name
self.balance = balance + 100
#define account level
if self.balance < 5000:
self.level = "Basic"
elif self.balance < 15000:
self.level = "Intermediate"
else:
self.level = "Advanced"
@classmethod
def bank_location(cls):
return str(cls.bank + " " + cls.location)
Client.bank_location()
# create the Savings class below
class Savings(Client):
interest_rate = 0.005
def update_balance(self):
self.balance += self.balance*self.interest_rate
return self.balance
# create an instance the same way as a Client but this time by calling Savings instead
Lina_Tran = Savings("Lina Tran", 50)
# it now has access to the new attributes and methods in Savings...
print(Lina_Tran.name)
print(Lina_Tran.balance)
print(Lina_Tran.interest_rate)
# ...as well as access to attributes and methods from the Client class as well
Lina_Tran.update_balance()
#defining a method outside the class definition
def check_balance(self):
return self.balance
Client.check_balance = check_balance
John_Doe.check_balance()
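# A small illustrative follow-up (assuming the cells above were run in order):
# because Savings inherits from Client, a Savings instance also picks up the
# newly attached method through normal attribute lookup.
Lina_Tran.check_balance()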
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As a quick refresher, Python sets are unique, unordered collections of objects. You can check if an item is in a set with the in keyword
Step2: You can add new things to the set using the .add() method, for example
Step3: So, our beliefs will be represented using sets, and we will represent the rules of our production system using a list of 2-tuples. Each tuple contains a condition and a result. The rule can be applied if the condition is present in the set of beliefs; if the rule is applied, then the result should be added to the set of beliefs. For example, the following belief again consists of 'a' and 'b', and the rules state that "if a, then b" and that "if c, then d"
Step4: To help visualize the rules a bit better, here is a function called print_rules that will print out each rule on a separate line
Step6: Using the representation of beliefs and rules specified above, write a function called `match` that searches through a given list of rules to find a rule that is triggered by the given set of beliefs.
Step7: Check that your function behaves as expected, based on the example given above
Step12: Part B (1.5 points)
Step13: Part C (0.5 points)
Step14: If you now observed that the switch is flipped, but nothing else, what would the above production system lead you to conclude? Use your forward_chain function to find out
Step15: Now consider a new production system in which the direction of each rule is reversed, so that observed effects imply their causes.
Step16: In this new production system, if you knew only that the grass is wet, what would you conclude? Again, you can use your forward_chain function to find out
Step17: Finally, what would you conclude from knowing only that the sprinkler was on, if both sets of rules are combined?
|
<ASSISTANT_TASK:>
Python Code:
example_belief = {'a', 'b'}
'a' in example_belief
'c' in example_belief
example_belief.add('c')
example_belief
example_belief = {'a', 'b'}
example_rules = [('a', 'b'), ('c', 'd')]
def print_rules(rules):
for rule in rules:
print(str(rule[0]) + " --> " + str(rule[1]))
print_rules(example_rules)
def match(belief, rules):
Determine whether a rule is triggered by the given set of beliefs.
The *first* rule in the list of rules that is triggered should be
returned (and you should only ever return one rule, even if multiple are
triggered). If no rule is triggered, None should be returned.
A rule should only be triggered if it adds something new to the set of
beliefs: for example, if the beliefs are `{'a', 'b'}`, and there is only
one rule, `('a', 'b')`, then it should not be triggered because it
doesn't add anything new. If the beliefs were just `{'a'}`, however, then
the rule *should* be triggered because it would add `b` to the set of
beliefs.
Hint: you should be able to do this in four lines of code (or less),
including the return statement.
Parameters
----------
belief : set
A set of true propositions.
rules : list of tuples
A list of tuples, such that for each tuple, the first element implies
the second (but not vice versa).
Returns
-------
The first rule (tuple) that was triggered, or None if no rules were triggered.
# YOUR CODE HERE
raise NotImplementedError()
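# One possible sketch (not the official solution) that satisfies the docstring
# above: return the first rule whose condition is already believed and whose
# result would add something new, or None if no such rule exists.
def match(belief, rules):
    for rule in rules:
        if rule[0] in belief and rule[1] not in belief:
            return rule
    return None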
print(match({'a'}, [('a', 'b')])) # should print ('a', 'b')
print(match({'a', 'b'}, [('a', 'b')])) # should print None
# add your own test cases here!
from nose.tools import assert_equal
# check that a --> b is triggered
assert_equal(match({'a'}, [('a', 'b')]), ('a', 'b'))
# check that nothing is triggered, because 'b' does not trigger any rules
assert_equal(match({'b'}, [('a', 'b')]), None)
# check that nothing is triggered, because 'a' and 'b' are already in the belief
assert_equal(match({'a', 'b'}, [('a', 'b')]), None)
# check that a --> b is triggered
assert_equal(match({'a'}, [('a', 'b'), ('b', 'c')]), ('a', 'b'))
# check that b --> c is triggered
assert_equal(match({'a', 'b'}, [('a', 'b'), ('b', 'c')]), ('b', 'c'))
# check that nothing is triggered, because 'a', 'b', and 'c' are already in the belief
assert_equal(match({'a', 'b', 'c'}, [('a', 'b'), ('b', 'c')]), None)
print("Success!")
def forward_chain(belief, rules):
Fully execute a set of given rules that match a given belief, until
no more new rules are triggered. That is, this function should:
(i) Scan through the rules until it finds rule(s) which are applicable,
(ii) trigger such rules and update beliefs,
(iii) repeat (i) and (ii) until no further rules can be triggered.
Returns a new set of beliefs (without changing the original set of beliefs)
based on which rules were triggered.
Note: this function should employ a `while` loop and should call the `match`
function you implemented in the first part of this problem.
Hint: you should be able to do this in 8 lines of code, including the
return statement.
Parameters
----------
belief : set
A set of true propositions.
rules : list of tuples
A list of tuples, such that for each tuple, the first element implies
the second (but not vice versa).
Returns
-------
tuple of (new_belief, triggered_rules):
new_belief is an updated set of true propositions, and triggered_rules
is the list of rules that were triggered, in order.
# YOUR CODE HERE
raise NotImplementedError()
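# One possible sketch (not the official solution), reusing the match function
# from Part A: keep applying triggered rules until no rule adds anything new.
def forward_chain(belief, rules):
    new_belief = set(belief)  # copy, so the original belief is left unchanged
    triggered_rules = []
    rule = match(new_belief, rules)
    while rule is not None:
        new_belief.add(rule[1])
        triggered_rules.append(rule)
        rule = match(new_belief, rules)
    return new_belief, triggered_rules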
b, r = forward_chain({'a'}, [('a', 'b'), ('b', 'c')])
print_rules(r) # should print both 'a --> b' and 'b --> c'
b # should be {'a', 'b', 'c'}
# add your own test cases here!
Check that `forward_chain` uses the `match` function.
from nose.tools import assert_raises
orig_match = match
del match
try:
assert_raises(NameError, forward_chain, {'a'}, [('a', 'b')])
finally:
match = orig_match
print("Success!")
Ensure that the new belief is a different object from the original belief.
b1 = {'a'}
b2 = forward_chain(b1, [('a', 'b')])
assert_equal(b1, {'a'})
assert_equal(b2, ({'a', 'b'}, [('a', 'b')]))
print("Success!")
Check that full forward chaining works.
b, r = forward_chain({'a'}, [('a', 'b'), ('b', 'c'), ('c', 'd')])
assert_equal(b, {'a', 'b', 'c', 'd'})
assert_equal(r, [('a', 'b'), ('b', 'c'), ('c', 'd')])
b, r = forward_chain({'a'}, [('b', 'c'), ('c', 'd'), ('a', 'b')])
assert_equal(b, {'a', 'b', 'c', 'd'})
assert_equal(r, [('a', 'b'), ('b', 'c'), ('c', 'd')])
b, r = forward_chain({'a'}, [('a', 'c'), ('a', 'b')])
assert_equal(b, {'a', 'b', 'c'})
assert_equal(r, [('a', 'c'), ('a', 'b')])
b, r = forward_chain({'b'}, [('a', 'b'), ('b', 'c')])
assert_equal(b, {'b', 'c'})
assert_equal(r, [('b', 'c')])
b, r = forward_chain({'a', 'b', 'c'}, [('b', 'c'), ('b', 'a'), ('a', 'b')])
assert_equal(b, {'a', 'b', 'c'})
assert_equal(r, [])
b, r = forward_chain(set(), [('b', 'c'), ('b', 'a'), ('a', 'b')])
assert_equal(b, set())
assert_equal(r, [])
print("Success!")
rules_1 = [
('switch is flipped', 'sprinkler was on'),
('sprinkler was on', 'grass is wet'),
('it rained', 'car is wet'),
('it rained', 'path is slippery'),
('it rained', 'grass is wet')
]
# perform forward chaining on the belief that 'switch is flipped'
belief_1, triggered_rules_1 = forward_chain({'switch is flipped'}, rules_1)
# print which rules were triggered
print_rules(triggered_rules_1)
# show the final belief
belief_1
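# (Expected outcome, noted for clarity): starting from 'switch is flipped', the chain
# 'switch is flipped --> sprinkler was on' and then 'sprinkler was on --> grass is wet'
# should fire, so belief_1 should be {'switch is flipped', 'sprinkler was on', 'grass is wet'}.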
rules_2 = [
('sprinkler was on', 'switch is flipped'),
('car is wet', 'it rained'),
('path is slippery', 'it rained'),
('grass is wet', 'it rained')
]
# perform forward chaining on the belief that 'grass is wet'
belief_2, triggered_rules_2 = forward_chain({'grass is wet'}, rules_2)
# print which rules were triggered
print_rules(triggered_rules_2)
# show the final belief
belief_2
# perform forward chaining on the belief that 'sprinkler was on'
belief_3, triggered_rules_3 = forward_chain({'sprinkler was on'}, rules_1 + rules_2)
# print which rules were triggered
print_rules(triggered_rules_3)
# show the final belief
belief_3
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Seems that we are mostly steering straight here.
Step 2
Step2: By trial and error, I ended up picking index 4341 where the image matches the left turn nicely.
Step3: Let's do the same with the hard right turn (steering angle > 0.5).
Step4: Again, after some peeking at the images, index 3357 looks fine.
Step5: Now I need to pick a record for driving straight. There should be plenty of choices to pick from, so some random exploration is probably the best way to find one that looks ok.
Step6: Step 3
Step7: Let's check the dimensions of the network layers.
Step8: Now I need to massage the images and corresponding steering angles into a form that is usable in model training.
Step9: Training hits zero validation loss after epoch 5, i.e., it should have learned the data perfectly. Let's see how well the model predicts.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
df = pd.read_csv('data/driving_log.csv')
print(df.describe())
df['steering'].hist(bins=100)
plt.title('Histogram of steering angle (100 bins)')
df[df['steering'] < -0.5].index
import os
from PIL import Image
def get_record_and_image(index):
record = df.iloc[index]
path = os.path.join('data', record.center)
return record, Image.open(path)
left_record, left_image = get_record_and_image(4341)
print('Steering angle {}'.format(left_record.steering))
plt.imshow(left_image)
df[df['steering'] > 0.5].index
right_record, right_image = get_record_and_image(3357)
print('Steering angle {}'.format(right_record.steering))
plt.imshow(right_image)
# I used this code to pick random images until I found one I liked
#index = df[(df['steering'] > -0.1) & (df['steering'] < 0.1)].sample(n=1).iloc[0].name
#print('Index', index)
straight_record, straight_image = get_record_and_image(796)
plt.imshow(straight_image)
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
model = Sequential()
model.add(Convolution2D(6, 5, 5, border_mode='valid', subsample=(5, 5), input_shape=(160, 320, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(16, 5, 5, border_mode='valid', subsample=(2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(120))
model.add(Activation('relu'))
model.add(Dense(84))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('tanh'))
for n, layer in enumerate(model.layers, 1):
print('Layer {:2} {:16} input shape {} output shape {}'.format(n, layer.name, layer.input_shape, layer.output_shape))
X_train = [np.array(image) for image in [left_image, right_image, straight_image]]
X_min = np.min(X_train)
X_max = np.max(X_train)
X_normalized = (X_train - X_min) / (X_max - X_min) - 0.5
y_train = np.array([record['steering'] for record in [left_record, right_record, straight_record]])
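# (Optional sanity check, not part of the original notebook): after scaling, the inputs
# should lie in [-0.5, 0.5] and have shape (3, 160, 320, 3) to match the model's input layer.
print(X_normalized.shape, X_normalized.min(), X_normalized.max())
print(y_train)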
from random import randrange
def generator():
while 1:
i = randrange(3)
# Create a one item batch by taking a slice
yield X_normalized[i:i+1], y_train[i:i+1]
model.compile('adam', 'mse')
history = model.fit_generator(generator(), samples_per_epoch=1000, validation_data=(X_normalized, y_train), nb_epoch=10, verbose=2)
for X, y in zip(X_normalized, y_train):
    print('Actual steering angle {} model prediction {}'.format(y, model.predict(X[np.newaxis, ...])[0][0]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating dataset
Step2: Learning a linear regression model on the entire dataset
Step3: Visualising the fit
Step4: Creating the initial train set, the test set and the pool
Step5: Creating a committee each learnt on different subset of the data
Step6: Plotting the fit of the committee on the entire dataset
Step7: Evaluate the performance on the test set
Step8: The MAE on the test set is
Step9: Active learning procedure
Step10: As expected, the error goes down as we increase the number of points queried
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
np.random.seed(0)
%matplotlib inline
X = np.arange(1, 1001, 1)
Y = 10*X + 4 + 400* np.random.randn(1000, )
plt.scatter(X, Y, s=0.1)
plt.xlabel("X")
plt.ylabel("Y")
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(X.reshape(-1,1), Y)
clf.intercept_
clf.coef_
plt.scatter(X, Y, s=0.1)
plt.xlabel("X")
plt.ylabel("Y")
plt.plot(X, clf.coef_[0]*X + clf.intercept_, color='k', label='Best fit on all data')
plt.legend()
plt.text(500, clf.coef_[0]*500 + clf.intercept_ +4000, "Y = {0:0.2f} X + {1:0.2f}".format(clf.coef_[0], clf.intercept_) )
from sklearn.model_selection import train_test_split
train_pool_X, test_X, train_pool_Y, test_Y = train_test_split(X, Y, test_size = 0.5)
train_X, pool_X, train_Y, pool_Y = train_test_split(train_pool_X, train_pool_Y, test_size=495)
plt.scatter(train_X, train_Y)
committee_size = 5
train_X_com = {0:{}}
train_Y_com = {0:{}}
models_com = {0:{}}
iteration = 0
for cur_committee in range(committee_size):
train_X_com[iteration][cur_committee], _, train_Y_com[iteration][cur_committee], _ = train_test_split(train_X, train_Y, train_size=0.5,
random_state=cur_committee)
models_com[iteration][cur_committee] = LinearRegression()
models_com[iteration][cur_committee].fit(train_X_com[iteration][cur_committee].reshape(-1,1), train_Y_com[iteration][cur_committee])
plt.scatter(X, Y, s=0.2)
for cur_committee in range(committee_size):
plt.plot(X, models_com[0][cur_committee].coef_[0]*X + models_com[0][cur_committee].intercept_,
label='Model {0}\nY = {1:0.2f} X + {2:0.2f}'.format(cur_committee,
models_com[0][cur_committee].coef_[0],
models_com[0][cur_committee].intercept_))
plt.legend()
estimations_com = {0:{}}
for cur_committee in range(committee_size):
estimations_com[0][cur_committee] = models_com[0][cur_committee].predict(test_X.reshape(-1, 1))
test_mae_error = {0:(pd.DataFrame(estimations_com[0]).mean(axis=1) - test_Y).abs().mean()}
test_mae_error[0]
num_iterations = 20
points_added_x=[]
points_added_y=[]
print("Iteration, Cost\n")
print("-"*40)
for iteration in range(1, num_iterations):
# For each committee: making predictions on the pool set based on model learnt in the respective train set
estimations_pool = {cur_committee: models_com[iteration-1][cur_committee].predict(pool_X.reshape(-1, 1)) for cur_committee in range(committee_size)}
# Finding points from the pool with highest disagreement among the committee - highest standard deviation
in_var = pd.DataFrame(estimations_pool).std(axis=1).argmax()
to_add_x = pool_X[in_var]
to_add_y = pool_Y[in_var]
points_added_x.append(to_add_x)
points_added_y.append(to_add_y)
# For each committee - Adding the point where the committe most disagrees
for com in range(committee_size):
if iteration not in train_X_com:
train_X_com[iteration] = {}
train_Y_com[iteration] = {}
models_com[iteration] = {}
train_X_com[iteration][com] = np.append(train_X_com[iteration-1][com], to_add_x)
train_Y_com[iteration][com] = np.append(train_Y_com[iteration-1][com], to_add_y)
# Deleting the point from the pool
pool_X = np.delete(pool_X, in_var)
pool_Y = np.delete(pool_Y, in_var)
# Training on the new set for each committee
for cur_committee in range(committee_size):
models_com[iteration][cur_committee] = LinearRegression()
models_com[iteration][cur_committee].fit(train_X_com[iteration][cur_committee].reshape(-1,1), train_Y_com[iteration][cur_committee])
estimations_com[iteration] = {}
for cur_committee in range(committee_size):
estimations_com[iteration][cur_committee] = models_com[iteration][cur_committee].predict(test_X.reshape(-1, 1))
test_mae_error[iteration]=(pd.DataFrame(estimations_com[iteration]).mean(axis=1) - test_Y).abs().mean()
print(iteration, (test_mae_error[iteration]))
pd.Series(test_mae_error).plot(style='ko-')
plt.xlim((-0.5, num_iterations+0.5))
plt.ylabel("MAE on test set")
plt.xlabel("# Points Queried")
fig, ax = plt.subplots()
import os
from matplotlib.animation import FuncAnimation
plt.rcParams['animation.ffmpeg_path'] = os.path.expanduser('/Users/nipun/anaconda3/bin/ffmpeg')
def update(iteration):
ax.cla()
ax.scatter(X, Y, s=0.2)
ax.set_title("Iteration: {} \n MAE = {:0.2f}".format(iteration, test_mae_error[iteration]))
for cur_committee in range(committee_size):
ax.plot(X, models_com[iteration][cur_committee].coef_[0]*X + models_com[iteration][cur_committee].intercept_,
label='Model {0}\nY = {1:0.2f} X + {2:0.2f}'.format(cur_committee,
models_com[iteration][cur_committee].coef_[0],
models_com[iteration][cur_committee].intercept_))
ax.scatter(points_added_x[iteration], points_added_y[iteration],s=100, color='red')
ax.legend()
fig.tight_layout()
anim = FuncAnimation(fig, update, frames=np.arange(0, num_iterations-1, 1), interval=1000)
plt.close()
from IPython.display import HTML
HTML(anim.to_html5_video())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Great! Now since we have a generator function we don't have to keep track of every single cube we created.
Step2: What if this was a normal function, what would it look like?
Step3: Notice that if we call some huge value of n (like 100000) the second function will have to keep track of every single result, when in our case we actually only care about the previous result to generate the next one!
Step4: After yielding all the values next() caused a StopIteration error. What this error informs us of is that all the values have been yielded.
Step5: But that doesn't mean the string itself is an iterator! We can check this with the next() function.
Step6: Interesting, this means that a string object supports iteration, but we can not directly iterate over it as we could with a generator function. The iter() function allows us to do just that!
|
<ASSISTANT_TASK:>
Python Code:
# Generator function for the cube of numbers (power of 3)
def gencubes(n):
for num in range(n):
yield num**3
for x in gencubes(10):
print x
def genfibon(n):
'''
Generate a Fibonacci sequence up to n
'''
a = 1
b = 1
for i in range(n):
yield a
a,b = b,a+b
for num in genfibon(10):
print num
def fibon(n):
a = 1
b = 1
output = []
for i in range(n):
output.append(a)
a,b = b,a+b
return output
fibon(10)
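# (Illustrative aside, not part of the original lecture code.) One way to see the
# memory difference: the generator object stays a fixed, tiny size regardless of n,
# while the list returned by fibon() grows with n because every result is stored.
import sys
print sys.getsizeof(genfibon(100000))  # small and constant: just the generator object
print sys.getsizeof(fibon(100000))     # much larger: the whole list is materialised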
def simple_gen():
for x in range(3):
yield x
# Assign simple_gen
g = simple_gen()
print next(g)
print next(g)
print next(g)
print next(g)
s = 'hello'
#Iterate over string
for let in s:
print let
next(s)
s_iter = iter(s)
next(s_iter)
next(s_iter)
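# (Illustrative aside, not part of the original lecture code.) Once we have an
# iterator we can keep calling next() until the string is exhausted, at which point
# StopIteration is raised, just like with our generator functions.
s_iter2 = iter('hi')
print next(s_iter2)  # prints 'h'
print next(s_iter2)  # prints 'i'
# a further next(s_iter2) would raise StopIteration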
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'Date': ['2020-02-15 15:30:00', '2020-02-16 15:31:00', '2020-02-17 15:32:00', '2020-02-18 15:33:00', '2020-02-19 15:34:00'],
'Open': [2898.75, 2899.25, 2898.5, 2898.25, 2898.5],
'High': [2899.25, 2899.75, 2899, 2899.25, 2899.5],
'Low': [2896.5, 2897.75, 2896.5, 2897.75, 2898.25],
'Last': [2899.25, 2898.5, 2898, 2898, 2898.75],
'Volume': [1636, 630, 1806, 818, 818],
'# of Trades': [862, 328, 562, 273, 273],
'OHLC Avg': [2898.44, 2898.81, 2898, 2898.31, 2898.62],
'HLC Avg': [2898.33, 2898.67, 2897.75, 2898.33, 2898.75],
'HL Avg': [2897.88, 2898.75, 2897.75, 2898.5, 2898.75],
'Delta': [-146, 168, -162, -100, -100],
'HiLodiff': [11, 8, 10, 6, 6],
'OCdiff': [-2, 3, 2, 1, 1],
'div_Bar_Delta': [1, 2, -1, -1, -1]})
df['Date'] = pd.to_datetime(df['Date'])
df.set_index('Date', inplace=True)
def g(df):
to_delete = ['2020-02-17', '2020-02-18']
return df[~(df.index.strftime('%Y-%m-%d').isin(to_delete))]
result = g(df.copy())
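# (Optional sanity check, not part of the original snippet): the rows dated
# 2020-02-17 and 2020-02-18 should no longer be present.
print(result.index.strftime('%Y-%m-%d').tolist())
# expected: ['2020-02-15', '2020-02-16', '2020-02-19']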
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'canesm5', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
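# As an illustration of how these cells are meant to be completed (the value below is
# purely an example, not a statement about any model): after the DOC.set_id(...) call,
# choose one of the listed values, e.g.
#     DOC.set_value("stratospheric aerosols optical thickness")
# For cells marked "PROPERTY VALUE(S)", DOC.set_value(...) can presumably be called once
# per selected value.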
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Computing for Mathematics - Mock individual coursework
Step5: b. \((2 ^ {\frac{1}{2}} + 2) ^ 2 - 2 ^ {\frac{5}{2}}\)
Step8: \((\frac{1}{8}) ^ {\frac{4}{3}}\)
Step11: Question 2
Step12: Question 3
Step13: b. Create a variable D_inv with value the inverse of \(D\).
Step14: c. Using D_inv output the solution of the following system of equations
Step18: Question 4
Step20: b. Using 1,000,000 samples, create a variable probability_of_catch whose value is an estimate of the probability of the frisbee being caught.
Step22: c. Using the above, create a variable probability_of_forehand_given_drop whose value is an estimate of the probability that the frisbee was thrown with a forehand given that it was not caught.
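Assuming the throw and catch probabilities used in the sample solution below (a backhand with probability 0.25 caught with probability 0.8, a forehand with probability 0.75 caught with probability 0.9), part c is a Bayes' theorem calculation: \(P(\text{forehand}\mid\text{not caught}) = \frac{P(\text{not caught}\mid\text{forehand})\,P(\text{forehand})}{P(\text{not caught})} = \frac{0.1 \times 0.75}{1 - (0.25 \times 0.8 + 0.75 \times 0.9)} = 0.6\), which the Monte Carlo estimate should approximate.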
|
<ASSISTANT_TASK:>
Python Code:
### BEGIN SOLUTION
import sympy as sym
a, b, c = sym.Symbol("a"), sym.Symbol("b"), sym.Symbol("c")
sym.expand((9 * a ** 2 * b * c ** 4) ** (sym.S(1) / 2) / (6 * a * b ** (sym.S(3) / 2) * c))
### END SOLUTION
q1_a_answer = _
feedback_text = """Your output is not a symbolic expression.
You are expected to use sympy for this question."""
try:
assert q1_a_answer.expand(), feedback_text
except AttributeError:
assert False, feedback_text
import sympy as sym
a, b, c = sym.Symbol("a"), sym.Symbol("b"), sym.Symbol("c")
expected_answer = (9 * a ** 2 * b * c ** 4) ** (sym.S(1) / 2) / (6 * a * b ** (sym.S(3) / 2) * c)
feedback_text = f"""Your answer is not correct.
The expected answer is {expected_answer}."""
assert sym.simplify(q1_a_answer - expected_answer) == 0, feedback_text
### BEGIN SOLUTION
(sym.S(2) ** (sym.S(1) / 2) + 2) ** 2 - 2 ** (sym.S(5) / 2)
### END SOLUTION
q1_b_answer = _
feedback_text = """Your output is not a symbolic expression.
You are expected to use sympy for this question."""
try:
assert q1_b_answer.expand(), feedback_text
except AttributeError:
assert False, feedback_text
x = sym.Symbol("x")
expected_answer = 6
feedback_text = f"""Your answer is not correct.
The expected answer is {expected_answer}."""
assert sym.expand(q1_b_answer - expected_answer) == 0, feedback_text
### BEGIN SOLUTION
(sym.S(1) / 8) ** (sym.S(4) / 3)
### END SOLUTION
q1_c_answer = _
feedback_text = """Your output is not a symbolic expression.
You are expected to use sympy for this question."""
try:
assert q1_c_answer.expand(), feedback_text
except AttributeError:
assert False, feedback_text
x = sym.Symbol("x")
expected_answer = sym.S(1) / 16
feedback_text = f"""Your answer is not correct.
The expected answer is {expected_answer}."""
assert q1_c_answer == expected_answer, feedback_text
def expand(expression):
### BEGIN SOLUTION
    """Takes a symbolic expression and expands it."""
return sym.expand(expression)
### END SOLUTION
feedback_text = """You did not include a docstring. This is important to help document your code.
It is done using triple quotation marks. For example:
def get_remainder(m, n):
\"\"\"
This function returns the remainder of m when dividing by n
\"\"\"
…
Using that it's possible to access the docstring,
one way to do this is to type: `get_remainder?`
(which only works in Jupyter) or help(get_remainder).
We can also comment code using `#` but this is completely
ignored by Python so cannot be accessed in the same way."""
try:
assert expand.__doc__ is not None, feedback_text
except NameError:
assert False, "You did not create a function called `expand`"
expression = x * (x + 1)
assert expand(expression) == x ** 2 + x, f"Your function failed for {expression}"
expression = x * (x + 1) - x ** 2
assert expand(expression) == x, f"Your function failed for {expression}"
expression = x ** 2 + 1
assert expand(expression) == x ** 2 + 1, f"Your function failed for {expression}"
### BEGIN SOLUTION
a = sym.Symbol("a")
D = sym.Matrix([[1, 2, a], [3, 1, 0], [1, 1, 1]])
### END SOLUTION
expected_D = sym.Matrix([[1, 2, a], [3, 1, 0], [1, 1, 1]])
feedback_text = f"The expected value of `D` is {expected_D}."
try:
assert sym.simplify(sym.expand(sym.simplify(D) - expected_D)) == sym.Matrix([[0, 0, 0] for _ in range(3)]), feedback_text
except NameError:
assert False, "You did not create a variable `D`"
### BEGIN SOLUTION
D_inv = D.inv()
### END SOLUTION
expected_D_inv = expected_D.inv()
feedback_text = f"The expected value of `D_inv` is {expected_D_inv}."
assert sym.simplify(sym.expand(sym.simplify(D_inv) - expected_D_inv)) == sym.Matrix([[0, 0, 0] for _ in range(3)]), feedback_text
### BEGIN SOLUTION
b = sym.Matrix([[3], [4], [1]])
sym.simplify(D.inv() @ b).subs({a: 4})
### END SOLUTION
answer_q3_c = _
expected_b = sym.Matrix([[3], [4], [1]])
expected_answer = sym.simplify(expected_D_inv @ expected_b).subs({a: 4})
feedback_text = f"The expected solution is {expected_answer}."
assert sym.expand(expected_answer - answer_q3_c) == sym.Matrix([[0], [0], [0]]), feedback_text
import random
def sample_experiment():
    """Returns the throw type and whether it was caught."""
### BEGIN SOLUTION
if random.random() < .25:
throw = "backhand"
probability_of_catch = .8
else:
throw = "forehand"
probability_of_catch = .9
caught = random.random() < probability_of_catch
### END SOLUTION
return throw, caught
feedback_text = """You did not include a docstring. This is important to help document your code.
It is done using triple quotation marks. For example:
def get_remainder(m, n):
\"\"\"
This function returns the remainder of m when dividing by n
\"\"\"
…
Using that it's possible to access the docstring,
one way to do this is to type: `get_remainder?`
(which only works in Jupyter) or help(get_remainder).
We can also comment code using `#` but this is completely
ignored by Python so cannot be accessed in the same way."""
try:
assert sample_experiment.__doc__ is not None, feedback_text
except NameError:
assert False, "You did not create a variable called `sample_experiment`"
try:
random.seed(0)
throw, caught = sample_experiment()
assert throw in ["forehand", "backhand"], "Your function did not give a throw with seed=0"
assert caught in [True, False], "Your function did not give a valid coin with seed=0"
random.seed(1)
throw, caught = sample_experiment()
assert throw in ["forehand", "backhand"], "Your function did not give a valid throw with seed=0"
assert caught in [True, False], "Your function did not give a valid coin with seed=0"
except NameError:
assert False, "You did not create a function called `sample_experiment` or there is an error in your function."
repetitions = 10_000
random.seed(0)
feedback_text = f"""Your function did not give a selection of forehand throws within acceptable error bounds.
Out of {repetitions} repetitions you got less than 5500 or more than 9500 forehand throws."""
throws = [sample_experiment()[0] for _ in range(repetitions)]
assert 5_500 <= throws.count("forehand") <= 9_500, feedback_text
### BEGIN SOLUTION
number_of_repetitions = 1_000_000
random.seed(0)
samples = [sample_experiment() for repetition in range(number_of_repetitions)]
probability_of_catch = sum(catch is True for throw, catch in samples) / number_of_repetitions
### END SOLUTION
assert type(probability_of_catch) is float, "You did not return a float"
expected_answer = sym.S(1) / (4) * sym.S(8) / 10 + sym.S(3) / (4) * sym.S(9) / 10
feedback_text = f"""The expected value is: {expected_answer}
Your value was not within 20% of the expected answer."""
assert expected_answer * .8 <= probability_of_catch <= expected_answer * 1.2, feedback_text
### BEGIN SOLUTION
samples_with_drop = [(throw, catch) for throw, catch in samples if catch is False]
number_of_drops = len(samples_with_drop)
probability_of_forehand_given_drop = sum(throw == "forehand" for throw, catch in samples_with_drop) / number_of_drops
### END SOLUTION
assert type(probability_of_forehand_given_drop) is float, "You did not return a float"
expected_answer = (sym.S(1) / (10) * sym.S(75) / 100) / (1 - (sym.S(1) / (4) * sym.S(8) / 10 + sym.S(3) / (4) * sym.S(9) / 10))
feedback_text = f"""The expected value is: {expected_answer}
Your value was not within 20% of the expected answer."""
assert expected_answer * .8 <= probability_of_forehand_given_drop <= expected_answer * 1.2, feedback_text
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now the ChannelManager code
Step2: Note that this class is not actually a subclass of ChannelManager, but it makes use of it in two ways
Step3: Nice. It worked. Here are the steps that happened
Step4: Choosing named targets
Step5: With that in place, the main() processing of a script could capture the command-line arguments and end up calling a target like
Step6: Or...
|
<ASSISTANT_TASK:>
Python Code:
def expensive_deriver(num):
# 10 minutes pass...
return num * 100
# Our fake durable storage holding the first 8 derived elements
storage = {num: expensive_deriver(num) for num in range(8)}
# The ExtantArtifact accessing that data
class ExampleExtantArtifact(ExtantArtifact):
def __init__(self, num):
super(ExampleExtantArtifact, self).__init__(self.get_me, name='ExampleExtantArtifact')
self.num = num
@gen.coroutine
def get_me(self):
raise gen.Return(storage[self.num])
class GuideChannelManager(object):
def __init__(self):
# sets the 'channel_manager' property, which must exist on the class
self.channel_manager = mgmt.ChannelManager()
@mgmt.channelproperty
def extant(self):
print('Creating extant channel')
return IterChannel(KeyedArtifact(i, ExampleExtantArtifact(i)) for i in sorted(storage.keys()))
@mgmt.channelproperty
def possible(self):
print('Creating possible channel')
return IterChannel(KeyedArtifact(i, DerivedArtifact(expensive_deriver, i)) for i in range(10))
@mgmt.channelproperty
def all(self):
print('Creating all channel')
return merge_keyed_channels(self.possible, self.extant)
print_chans(GuideChannelManager().all, mode='get')
# recreate the storage to not mess up other parts of the notebook when run out of order
storage = {num: expensive_deriver(num) for num in range(8)}
targets = dict()
def configure_channels():
mgr = GuideChannelManager()
targets['possible'] = lambda: mgr.possible
targets['extant'] = lambda: mgr.extant
targets['all'] = lambda: mgr.all
targets['default'] = targets['all']
configure_channels()
def run_target(name='default'):
print_chans(targets[name](), mode='get')
run_target('extant')
# Calling again to act like a fresh running of the script
configure_channels()
run_target()
# recreate the storage to not mess up other parts of the notebook when run out of order
storage = {num: expensive_deriver(num) for num in range(8)}
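# A minimal sketch of the main() dispatch described above: parse a target name from the
# command line and run it. The argparse wiring is assumed for illustration; only
# configure_channels/run_target/targets come from the code above.
import argparse

def main(argv=None):
    """Run one of the named channel targets, defaulting to 'default'."""
    parser = argparse.ArgumentParser(description="Run a named channel target")
    parser.add_argument("target", nargs="?", default="default",
                        help="one of: " + ", ".join(sorted(targets)))
    args = parser.parse_args(argv)
    configure_channels()  # build a fresh set of channels for this invocation
    run_target(args.target)

# In a standalone script this would be invoked under `if __name__ == "__main__": main()`.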
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Section 1
Step2: Section 2
Step3: Section 4
Step4: Section 5
Step5: The figure above shows that the point estimates, especially for the mean, are quite noisy and range from just below 3 days to just above 8 days.
Step6: Three articles of interest show up
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import os
import re
import seaborn as sns
from datetime import datetime as dt
from support_funs_incubation import stopifnot, uwords, idx_find, find_beside, ljoin, sentence_find, record_vals
!pip install ansicolors
# Takes a tuple (list(idx), sentence) and will print in red anything in the index
def color_printer(idx_sentence):
indices = idx_sentence[0]
sentence = idx_sentence[1]
mat = np.zeros([2 * len(indices) + 1, 2], dtype=int)
for ii, idx in enumerate(indices):
ri = 2 * ii + 1
mat[ri - 1, 1] = idx[0]
mat[ri, :] = idx
mat[ri + 1, 0] = idx[1]
if ii + 1 == len(indices):
mat[ri + 1, 1] = len(sentence)
output = ''
for ii in range(mat.shape[0]):
if ii % 2 == 0:
output = output + sentence[mat[ii, 0]:mat[ii, 1]]
else:
output = output + red(sentence[mat[ii, 0]:mat[ii, 1]])
output = output.replace('\n', '')
print(output)
from colors import red, black, white # ansicolors
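# Illustrative usage of color_printer with a made-up sentence and index (not from the data):
# color_printer(([(4, 14)], "The incubation period was 5 days."))  # prints "incubation" in red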
dir_base = os.getcwd()
dir_data = os.path.join(dir_base,'..','input','incubation')
# load data
df = pd.read_csv(os.path.join(dir_data, 'df_txt.csv'))
df['date'] = pd.to_datetime(df.date)
print(df.shape)
# remove prefix from some abstracts: publically funded repositories.... etc
pref = 'COVID-19 resource centre remains active.'
for ii, aa in enumerate(df.abstract):
if isinstance(aa, float): # nan
continue
hit = re.search(pref, aa)
if hit:
df.abstract.iloc[ii] = aa[hit.span()[1] + 1:]
# Find ways in which covid and ncov are referred to
regex_ncov = r'(20)?19(\-)?ncov|ncov(\-)?(20)?19'
regex_covid = r'covid(\-)?(20)?19'
# row indices
idx_covid_abs = np.where(idx_find(df.abstract, regex_covid))[0]
idx_ncov_abs = np.where(idx_find(df.abstract, regex_ncov))[0]
idx_union_abs = np.union1d(idx_covid_abs, idx_ncov_abs)
di_regex = {'covid': regex_covid, 'ncov': regex_ncov}
di_idx = {'covid': idx_covid_abs, 'ncov': idx_ncov_abs}
print('%i possible "covid" articles (using abstract)\n'
'%i possible nCoV articles (using abstract)\n'
'Union: %i, interection: %i' %
(len(idx_covid_abs), len(idx_ncov_abs), len(idx_union_abs),
len(np.intersect1d(idx_covid_abs, idx_ncov_abs))))
dfmt = '%B %d, %Y'
date_ncov_min = df.date.iloc[idx_ncov_abs].min().strftime(dfmt)
date_ncov_max = df.date.iloc[idx_ncov_abs].max().strftime(dfmt)
date_covid_min = df.date.iloc[idx_covid_abs].min().strftime(dfmt)
date_covid_max = df.date.iloc[idx_covid_abs].max().strftime(dfmt)
print('First and last nCoV article: %s & %s\n'
'First and last covid-19 article: %s & %s' %
(date_ncov_min, date_ncov_max, date_covid_min, date_covid_max))
holder = []
for term in di_regex:
regex = di_regex[term]
idx = di_idx[term]
dat_abstract = uwords(df.abstract.iloc[idx], regex).assign(doc='abstract')
dat_txt = uwords(df.txt.iloc[idx], regex).assign(doc='txt')
dat = pd.concat([dat_abstract, dat_txt])
dat = dat.groupby('term').n.sum().reset_index()
dat.insert(0, 'tt', term)
holder.append(dat)
df_term = pd.concat(holder).reset_index(drop=True)
# Term usage
print(df_term)
pat_peds = r'infant|child|pediatric|age\<'
idx_incubation = []
idx_peds = []
for ii in idx_union_abs:
abs, txt = df.abstract[ii], df.txt[ii]
corpus = abs + '. ' + txt
if re.search(r'incubation', corpus, re.IGNORECASE) is not None:
idx_incubation.append(ii)
if re.search(pat_peds, corpus, re.IGNORECASE) is not None:
idx_peds.append(ii)
idx_incubation_peds = np.intersect1d(idx_incubation, idx_peds)
print('%i incubation articles, with %i pediatric articles, %i overlap' %
(len(idx_incubation), len(idx_peds), len(idx_incubation_peds)))
# What is the most common word to appear before/after incubation?
holder_l, holder_r = [], []
for ii in idx_incubation:
abs, txt = df.abstract[ii], df.txt[ii]
corpus = abs + '. ' + txt
rterm = find_beside(corpus, 'incubation', tt='right')
lterm = find_beside(corpus, 'incubation', tt='left')
holder_r.append(rterm)
holder_l.append(lterm)
dat_suffix = pd.Series(ljoin(holder_r)).str.lower().value_counts().reset_index().rename(
columns={0: 'n', 'index': 'suffix'})
dat_prefix = pd.Series(ljoin(holder_l)).str.lower().value_counts().reset_index().rename(
columns={0: 'n', 'index': 'suffix'})
print(dat_suffix.head(50))
print(dat_prefix.head(50))
suffix = ['period', 'time', 'distribution', 'duration', 'interval', 'rate', 'mean', 'median', 'estimation']
suffix = [z + r'(s)?' for z in suffix]
pat_incubation = [r'incubation\s'+z for z in suffix]
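# For example, the first resulting pattern is r'incubation\speriod(s)?', which matches
# "incubation period" or "incubation periods"; the remaining suffixes follow the same form.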
do_run = False
if do_run:
keepers = []
for jj, ii in enumerate(idx_incubation):
abs, txt = df.abstract[ii], df.txt[ii]
corpus = abs + '. ' + txt
idx_sentences = sentence_find(corpus, pat_incubation)
if len(idx_sentences) > 0:
try:
dd = df.loc[ii,'date'].strftime('%B %d, %Y')
except:
dd = 'NaN'
print('---- Title: %s, date: %s, index: %i (%i of %i) ----' %
(df.loc[ii, 'title'], dd , ii,jj+1,len(idx_incubation)))
tmp = record_vals(idx_sentences)
dat = pd.DataFrame(tmp,columns=['pos','txt']).assign(idx = ii)
keepers.append(dat)
dat_sentences = pd.concat(keepers)
dat_sentences = dat_sentences[['idx','pos','txt']]
dat_sentences['txt'] = dat_sentences.txt.str.replace('\n','')
dat_sentences = df.iloc[idx_incubation][['source','title','doi','date']].rename_axis('idx').reset_index().merge(
dat_sentences,on='idx',how='right')
dat_sentences.to_csv(os.path.join(dir_output,'sentence_flag.csv'),index=False)
df_moments = pd.read_csv(os.path.join(dir_data,'sentence_flag.csv'))
df_txt = df_moments[['title','pos','txt']].copy()
df_moments.drop(columns = ['pos','txt'],inplace=True)
df_moments['date'] = pd.to_datetime(df_moments.date)
moments = df_moments.moments.str.split('\;',expand=True).reset_index().melt('index')
moments = moments[moments.value.notnull()].reset_index(drop=True).drop(columns='variable')
tmp = moments.value.str.split('\=',expand=True)
moments = moments.drop(columns='value').assign(moment=tmp.iloc[:,0], val=tmp.iloc[:,1].astype(float))
df_moments = df_moments.drop(columns='moments').reset_index().merge(moments,on='index',how='right').drop(columns='index')
# Print off key sentences
print('A total of %i unique studies' % (df_moments.title.unique().shape[0]) )
print('\n\n')
for ii, rr in df_txt.iterrows():
print('----- Article: %s -----' % rr['title'] )
idx = [int(z) for z in re.findall(r'\d+', rr['pos'])]
idx = np.array(idx).reshape([int(len(idx) / 2), 2])
idx = [tuple(idx[i]) for i in range(idx.shape[0])]
sentence = rr['txt']
idx_sentence = (idx,sentence)
color_printer(idx_sentence)
print('\n')
di_moments = {'lb':'Lower-bound','ub':'Upper-bound','mu':'Mean','med':'Median',
'q2':'25th percentile','q3':'75th percentile'}
# Plot the moments over time
g = sns.FacetGrid(data=df_moments.assign(moment=lambda x: x.moment.map(di_moments)),
col='moment',col_wrap=3,sharex=True,sharey=False,height=4,aspect=1)
g.map(sns.lineplot,'date','val',ci=None)
g.map(sns.scatterplot,'date','val')
g.set_xlabels('');g.set_ylabels('Days')
g.fig.suptitle(t='Figure: Estimate of Incubation period moments over time',size=16,weight='bold')
g.fig.subplots_adjust(top=0.85)
for ax in g.axes.flat:
ax.set_title(ax.title._text.replace('moment = ', ''))
# dates = [dt.strftime(dt.strptime(z,'%Y-%m-%d'),'%b-%d, %y') for z in dates]
xticks = [737425., 737439., 737456., 737470., 737485., 737499.]
lbls = ['Jan-01, 20', 'Jan-15, 20', 'Feb-01, 20', 'Feb-15, 20', 'Mar-01, 20', 'Mar-15, 20']
g.set_xticklabels(rotation=45,labels=lbls)
g.set(xticks = xticks)
ave = df_moments.groupby('moment').val.mean().reset_index().rename(columns={'moment':'Moment','val':'Average'}).assign(Moment=lambda x: x.Moment.map(di_moments))
print(np.round(ave,1))
# Get the index
df_match = df_txt.merge(df,on='title',how='left').rename(columns={'txt_x':'sentence','txt_y':'txt_full'})
for jj, rr in df_match.iterrows():
try:
dd = rr['date'].strftime('%B %d, %Y')
except:
dd = 'NaN'
corpus = rr['abstract'] + '. ' + rr['txt_full']
peds_sentences = sentence_find(corpus, pat_peds)
incubation_sentences = sentence_find(corpus, pat_incubation)
if len(peds_sentences) > 0 and len(incubation_sentences) > 0:
print('---- Title: %s, date: %s (%i of %i) ----' %
(rr['title'], dd, jj+1, df_match.shape[0]))
for ii_ss in peds_sentences + incubation_sentences:
color_printer(ii_ss)
print('\n')
from PIL import Image
from matplotlib import pyplot as plt
image = Image.open(os.path.join(dir_data,"age_incubation.png"))
fig, ax = plt.subplots(figsize=(18,9))
ax.imshow(image)
fig.suptitle("Figure 3: from (Han 2020) ", fontsize=18,weight='bold')
fig.subplots_adjust(top=1.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1 (sensitivity check, run 1)
Step1: Do some preprocessing to group the data by 'Anon Stud Id' and extract features for further analysis
Step2: Note to reviewers
Step3: Reading from the scratch instead
Step4: Making backup for stud_data in HDF5 format
Step5: Choosing a student subset for a sensitivity check
Step6: Clustering
Step7: Choose the pair of columns with best score
Step8: Visualising the KMeans clusters
Step9: Then, consider adding one more column to further increase the score
Step10: As expected, the pair ['frac_1s_hints', 'max_probl_views'] still gives the best score.
Step11: GaussianMixture
Step12: AgglomerativeClustering
Step13: Visualising the AgglomerativeClustering clusters
Step14: Further clustering of obtained KMeans groups
Step15: Visualise obtained clusters
Step16: As we see, group 01 contains more students with "gaming" behaviour, so I proceed with group 00
Step17: So, there is a subgroup 001 of 1001 students that do not use many hints. What about the rest (000, 5482 students)?
Step18: Splitting group 0000 (students with large 'num_sess' and 'num_probs')
Step19: As we see, these two groups represent students with "intermediate experience" (00000) and "largest experience" (00001).
|
<ASSISTANT_TASK:>
Python Code:
# Select test_size and random_state for splitting a subset
test_size=0.1
random_state=0
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import time
import gzip
import shutil
import seaborn as sns
from collections import Counter
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans, MeanShift, estimate_bandwidth, AgglomerativeClustering
from sklearn.metrics import silhouette_score #, make_scorer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.neighbors import kneighbors_graph
from sklearn.model_selection import train_test_split
def hdf_fixed_write_compress(df):
df.to_hdf('data1-step1.hdf','test',mode='w',complib='blosc')
return
def hdf_fixed_read_compress():
df = pd.read_hdf('data.hdf','test')
return df
with gzip.open('data1.hdf.gz', 'rb') as f_in, open('data.hdf', 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
!ls -lh data.hdf
data = hdf_fixed_read_compress()
data.head()
def prepare_stud_data_new(df):
start_time = time.time()
stud_list = df['Anon Student Id'].unique()
cols=['num_sess', \
'num_days', \
'num_probs', \
'num_atts', \
'num_hints', \
'frac_corr_atts', \
'frac_3s_atts', \
'frac_1s_hints', \
'time_atts', \
'time_hints', \
'max_probl_views', \
'max_atts']
numbers = []
#stud_data = pd.DataFrame(columns=cols)
stud_info_df = pd.DataFrame()
i = 0
for stud_name in stud_list:
stud_info_df = df[df['Anon Student Id'] == stud_name].copy()
# total number of days loading the system
num_days = len(set(stud_info_df['Day']))
# total number of sessions opened
num_sessions = len(set(stud_info_df['Session Id']))
# total number of problems entered
num_problems = len(set(stud_info_df['Problem Name']))
# total number of attempts made by the student
num_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0].shape[0]
# total number of hints made by the student
num_hints = stud_info_df[stud_info_df['Student Response Type'] == 1].shape[0]
# fraction of short attemps (with time <= 3 sec)
if (num_attempts > 0):
frac_3s_atts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Duration (sec)'] <= 3.0)].shape[0] / num_attempts
else:
frac_3s_atts = 0
# fraction of short hints (with time <= 1 sec)
if (num_hints > 0):
frac_1s_hints = stud_info_df[(stud_info_df['Student Response Type'] == 1) & (stud_info_df['Duration (sec)'] <= 1.0)].shape[0] / num_hints
else:
frac_1s_hints = 0
# fraction of correct attempts
if (num_attempts > 0):
fraction_correct_attempts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Outcome'] == 0)].shape[0] / num_attempts
else:
fraction_correct_attempts = 0
# total number of time spent for attempts (in seconds)
total_time_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0]['Duration (sec)'].sum()
# total number of time spent for hints (in seconds)
total_time_hints = stud_info_df[stud_info_df['Student Response Type'] == 1]['Duration (sec)'].sum()
# averaged maximal numbers of 'Problem View'
avg_max_problem_views = stud_info_df[['Problem Name', 'Problem View']].groupby(['Problem Name']).agg(np.max).mean()[0]
# averaged maximal number of attempts ('x')
avg_max_attempts = stud_info_df[['Problem Name', 'x']].groupby(['Problem Name']).agg(np.max).mean()[0]
stud_name = i # assign unique numerical ID to each student
if num_attempts != 0:
            avg_time_att = total_time_attempts / num_attempts
else:
avg_time_att = 0
if num_hints != 0:
avg_time_hint = total_time_hints / num_hints
else:
avg_time_hint = 0
numbers.append([num_sessions, \
num_days, \
num_problems, \
num_attempts, \
num_hints, \
fraction_correct_attempts, \
frac_3s_atts, \
frac_1s_hints, \
total_time_attempts, \
total_time_hints, \
avg_max_problem_views, \
avg_max_attempts])
print("\r\t>>> Progress\t:{:.4%}".format((i + 1)/len(stud_list)), end='')
i += 1
stud_data = pd.DataFrame(data=numbers, columns=cols)
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
return stud_data
#stud_data = prepare_stud_data_new(data.head(500000).copy())
#stud_data = prepare_stud_data_new(data.copy())
stud_data = pd.read_hdf('stud_data.hdf','test')
#stud_data.to_hdf('stud_data.hdf','test',mode='w',complib='blosc')
stud_data.shape
stud_data.describe()
print(test_size, random_state)
stud_data_1, stud_data_2 = train_test_split(stud_data, test_size=test_size, random_state=random_state)
stud_data_1.shape[0]/stud_data.shape[0]
stud_data = stud_data_1
# old name: process_data
def transform_data(selected_columns, data):
'''
Apply log-transform and MinMaxScaler() to the selected data columns which are not fractions (frac_*)
Parameters
==========
selected_columns : list
list of columns to leave in processed data
data : pandas.DataFrame
data to process (note that data should contain all selected_columns)
Returns
=======
log_scaled_data : pandas.DataFrame
log-transformed and scaled data selected by selected_columns
'''
data.reset_index(drop=True, inplace=True)
log_data = data[selected_columns].copy()
skewed = log_data.columns.tolist()
skewed = [item for item in skewed if not item.startswith('frac_')]
log_data[skewed] = log_data[skewed].apply(lambda x: np.log10(x + 1))
scaler = MinMaxScaler().fit(log_data)
log_scaled_data = scaler.transform(log_data)
log_scaled_data = pd.DataFrame(log_scaled_data, columns=log_data.columns)
return log_scaled_data
def replace_group_numbers(best_preds):
'''
Replace group numbers in best_preds with sorting by group size
(so that the largest group is 0, the second largest is 1 etc.)
Parameters
==========
best_preds : numpy array
unsorted array of predictions
Returns
=======
best_preds_sorted : numpy array
sorted array of predictions
'''
pp = pd.DataFrame(best_preds, columns = ["old_group"])
dict_pp = {item[0]: i for i, item in enumerate(Counter(best_preds).most_common())}
pp['new_group'] = pp['old_group'].replace(dict_pp)
best_preds_sorted = np.array(pp['new_group'])
return best_preds_sorted
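# Illustrative example on made-up labels: replace_group_numbers(np.array([2, 2, 0, 2, 1, 1]))
# returns array([0, 0, 2, 0, 1, 1]) -- the most common label (2) becomes 0, the next (1) stays 1,
# and the rarest (0) becomes 2.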
def kmeans(log_scaled_data):
'''
    Apply KMeans clustering algorithm with 2 <= cluster_number <= 5 to log_scaled_data
(transformed and scaled by transform_data() function)
Parameters
==========
log_scaled_data : pandas.DataFrame
data log-transormed and MinMaxScaler()-ed for KMeans clustering
Returns
=======
best_clusterer : sklearn Model
clustering algorithm with the largest Silhouette Coefficient
best_score : float
the largest value of the Silhouette Coefficient
best_preds_sorted : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
'''
best_score = 0
for n_clusters in range(2,6):
clusterer = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
clusterer.fit(log_scaled_data)
preds = clusterer.predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
if best_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = best_clusterer.predict(log_scaled_data)
best_score = score
best_clusters = n_clusters
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, best_score, best_preds_sorted
all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', 'num_hints', 'frac_corr_atts', \
'frac_3s_atts', 'frac_1s_hints', 'time_atts', 'time_hints', 'max_probl_views', 'max_atts']
def choose_pair_columns_kmeans(all_columns, log_scaled_all_data):
'''
Selects pair of columns in data that produces clusters with the largest score.
In this function, only KMeans clustering algorithm is used
Parameters
==========
all_columns : list
list of columns to look for the pair with the largest score
log_scaled_data : pandas DataFrame
properly scaled DataFrame with all columns
Returns
=======
best_columns : list
pair of data columns with the largest score
best_score : float
the largest value of the score
best_clusterer : sklearn Model
clustering algorithm with the largest score
best_preds : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
'''
best_score = 0
best_columns = []
j = 0
l = len(all_columns)
num_pairs = (l-1)*l/2
for column in all_columns:
selected_columns = [column]
columns_to_add = [a for a in all_columns if (a not in selected_columns)]
for column1 in columns_to_add:
if all_columns.index(column) < all_columns.index(column1):
selected_columns = [column, column1]
print("\r\t>>> Progress\t:{:.4%}".format((j+1)/num_pairs), end='')
j += 1
#log_scaled_data = transform_data(selected_columns, stud_data)
clusterer, score, preds = kmeans(log_scaled_all_data[selected_columns])
if score > best_score:
best_score = score
best_clusterer = clusterer
best_preds = preds
best_columns = selected_columns.copy()
return best_columns, best_score, best_clusterer, best_preds
start_time = time.time()
log_scaled_all_data = transform_data(all_columns, stud_data)
# consider skipping the step below because it takes some time (~5 minutes)
best_columns, best_kmeans_score, best_kmeans_clusterer, best_kmeans_preds = choose_pair_columns_kmeans(all_columns, log_scaled_all_data)
# Instead run it single time (6 seconds only)
#best_columns = ['frac_1s_hints', 'max_probl_views']
#best_kmeans_clusterer, best_kmeans_score, best_kmeans_preds = kmeans(log_scaled_all_data[best_columns])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_columns)
print("\t>>> Best score:", best_kmeans_score)
print("\t>>> Best clusterer:", best_kmeans_clusterer)
print("\t>>> Best preds:", best_kmeans_preds)
def preds_to_indices(preds): # gives array and returns array of indices with 1s
new_list = []
for i, val in enumerate(preds):
if val == 1:
new_list.append(i)
return np.array(new_list)
log_scaled_all_data.describe()
best_kmeans_preds_mask = preds_to_indices(best_kmeans_preds)
log_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
log_scaled_all_data_kmeans_1 = log_scaled_all_data.copy()[log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
plt.scatter(log_scaled_all_data_kmeans_0['frac_1s_hints'], \
log_scaled_all_data_kmeans_0['max_probl_views'], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_1['frac_1s_hints'], \
log_scaled_all_data_kmeans_1['max_probl_views'], \
alpha=0.6, s=15, c='grey')
plt.xlim([0.0, 0.6])
plt.ylim([0.0, 0.4])
plt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')
plt.figtext(x=0.20, y=0.19, s='Group 0', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel('frac_1s_hints', size=14)
ax.set_ylabel('max_probl_views', size=14)
plt.plot((0.14, 0.14), (0.001, 0.399), 'k--', c='blue')
plt.show()
print(log_scaled_all_data_kmeans_0.shape, log_scaled_all_data_kmeans_1.shape)
def cols_iterate_kmeans(selected_columns, best_score, best_clusterer, best_preds):
all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', \
'num_hints', 'frac_corr_atts', 'frac_3s_atts', 'frac_1s_hints', \
'time_atts', 'time_hints', 'max_probl_views', 'max_atts']
columns_to_add = [a for a in all_columns if (a not in selected_columns)]
#print(columns_to_add)
for column in columns_to_add:
print("*"*40)
print("*** Trying to add column", column)
print("*"*40)
selected_columns.append(column)
log_scaled_data = transform_data(selected_columns, stud_data)
clusterer, score, preds = kmeans(log_scaled_data)
if score > best_score:
print("!!! Success !!!")
best_score = score
best_clusterer = clusterer
best_preds = preds
print("!!! New score is", best_score)
print("!!! New best clusterer is", best_clusterer)
print("!!! New best selected_columns are", selected_columns)
columns_to_add.remove(column)
else:
print("!!! Last score is equal or worse then our best one")
print("!!! According to Occam's razor, remove the column", column)
selected_columns.remove(column)
print("!!! Still the best selected columns are", selected_columns)
return selected_columns, best_score, best_clusterer, best_preds
# Just skip this step, it does not give new results:
kmeans_clusterer = best_kmeans_clusterer
kmeans_score = best_kmeans_score
kmeans_preds = best_kmeans_preds
selected_columns = best_columns # ['frac_1s_hints', 'max_probl_views']
new_columns, new_kmeans_score, new_kmeans_clusterer, new_kmeans_preds = cols_iterate_kmeans(selected_columns, kmeans_score, kmeans_clusterer, kmeans_preds)
if new_kmeans_score > kmeans_score:
print("+++ SUCCESS")
selected_columns = new_columns
best_kmeans_score = new_kmeans_score
best_kmeans_clusterer = new_kmeans_clusterer
best_kmeans_preds = new_kmeans_preds
else:
print("--- GIVE UP")
def largest_cluster_fraction(preds):
'''
calculates the fraction of students that are in the largest group
Parameters
==========
preds : list
list of predictions
Returns
=======
fraction : float
largest fraction of students
best_i : integer
number of the largest group
'''
fraction = 0
ll = len(preds)
for i in np.unique(preds):
frac = len(preds[preds == i])/ll
if frac > fraction:
fraction = frac
best_i = i
return fraction, best_i
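# Illustrative example on made-up labels: largest_cluster_fraction(np.array([0, 0, 0, 1, 1, 2]))
# returns (0.5, 0) -- half of the points fall in group 0, the largest group.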
# Rewrite similar to kmeans procedure !!!
def meanshift(log_scaled_data):
'''
Apply MeanShift clustering algorithm to log_scaled_data
(transformed and scaled by transform_data() function)
    Number of clusters is selected according to the estimate_bandwidth procedure
with quantiles in np.linspace(0.01, 0.99, 99)
Parameters
==========
log_scaled_data : pandas.DataFrame
data log-transormed and MinMaxScaler()-ed for KMeans clustering
Returns
=======
best_clusterer : sklearn Model
clustering algorithm with the largest Silhouette Coefficient
best_score : float
the largest value of the Silhouette Coefficient
best_preds_sorted : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
cluster_frac : float
fraction of students inside the largest group
'''
start_time = time.time()
best_score = 0
best_cluster_frac = 0
for alpha in np.linspace(0.01, 0.99, 99):
bandwidth = estimate_bandwidth(log_scaled_data, quantile=alpha, n_samples=None, random_state=0)
clusterer = MeanShift(bandwidth=bandwidth, bin_seeding=True)
clusterer.fit(log_scaled_data)
preds = clusterer.fit_predict(log_scaled_data)
cluster_frac = largest_cluster_fraction(preds)[0]
# Calculate the mean silhouette coefficient for the number of clusters chosen
try:
score = silhouette_score(log_scaled_data, preds)
except ValueError:
score = 0
print(alpha, clusterer.cluster_centers_.shape[0], score, cluster_frac)
# setting cluster_frac > 0.85, the value obtained in KMeans algorithm for ['frac_1s_hints', 'max_probl_views']
if (best_score < score) and (cluster_frac < 0.85):
best_clusterer = clusterer
best_preds = preds
best_score = score
best_clusters = clusterer.cluster_centers_.shape[0]
best_cluster_frac = cluster_frac
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", best_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
cluster_frac = best_cluster_frac
return best_clusterer, best_score, best_preds_sorted, cluster_frac
# Running MeanShift is too slow: it takes about 9 min for 1 pair,
# and produces quite poor results (largest score = 0.56 for reasonable max_fractions < 0.85)
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
best_meanshift_clusterer, best_meanshift_score, best_meanshift_preds, _ = meanshift(log_scaled_data)
print(best_meanshift_clusterer, best_meanshift_score, best_meanshift_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
def gaussmix(log_scaled_data): # GaussianMixture
start_time = time.time()
max_score = 0
for n_clusters in range(2,6):
clusterer = GaussianMixture(random_state=0, n_init=50, n_components=n_clusters).fit(log_scaled_data)
preds = clusterer.predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
print("For our model with", clusterer.n_components, "clusters, the sihlouette score is", score)
if max_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = best_clusterer.predict(log_scaled_data)
max_score = score
best_clusters = n_clusters
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", max_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, max_score, best_preds_sorted
def run_clustering_gaussmix(log_scaled_data):
best_score = 0
print(">>> GaussianMixture:")
clusterer, score, preds = gaussmix(log_scaled_data)
if score > best_score:
best_clusterer = clusterer
best_score = score
best_preds = preds
print("Best clusterer is", best_clusterer)
print("Max score is", best_score)
print("Best preds is", best_preds)
return best_clusterer, best_score, best_preds
# ~0.6 min running time but very small score (~0.39)
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
gaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds = run_clustering_gaussmix(log_scaled_data)
print(gaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
def agglom(log_scaled_data): # AgglomerativeClustering with 'ward' connectivity
start_time = time.time()
max_score = 0
for n_clusters in range(2,3): # use only 2 clusters
connectivity = kneighbors_graph(log_scaled_data, n_neighbors=100, include_self=False)
# make connectivity symmetric
connectivity = 0.5 * (connectivity + connectivity.T)
clusterer = AgglomerativeClustering(n_clusters=n_clusters, \
linkage='ward', \
connectivity=connectivity)
preds = clusterer.fit_predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
print("For our model with", clusterer.n_clusters, "clusters, and the sihlouette score is", score)
if max_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = preds
max_score = score
best_clusters = n_clusters
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", max_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, max_score, best_preds_sorted
def run_clustering_agglom(log_scaled_data):
best_score = 0
print(">>> AgglomerativeClustering:")
clusterer, score, preds = agglom(log_scaled_data)
if score > best_score:
best_clusterer = clusterer
best_score = score
best_preds = preds
print("Best clusterer is", best_clusterer)
print("Max score is", best_score)
print("Best preds is", best_preds)
return best_clusterer, best_score, best_preds
# Gives results very similar to KMeans but takes ~4 times more running time
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
best_agglom_clusterer, best_agglom_score, best_agglom_preds = run_clustering_agglom(log_scaled_data)
print(best_agglom_clusterer, best_agglom_score, best_agglom_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
best_agglom_preds_mask = preds_to_indices(best_agglom_preds)
log_scaled_data_agglom_0 = log_scaled_data.copy()[~log_scaled_data.index.isin(best_agglom_preds_mask)]
log_scaled_data_agglom_1 = log_scaled_data.copy()[log_scaled_data.index.isin(best_agglom_preds_mask)]
plt.scatter(log_scaled_data_agglom_0['frac_1s_hints'], \
log_scaled_data_agglom_0['max_probl_views'], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_data_agglom_1['frac_1s_hints'], \
log_scaled_data_agglom_1['max_probl_views'], \
alpha=0.6, s=15, c='grey')
plt.xlim([0.0, 0.6])
plt.ylim([0.0, 0.4])
plt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')
plt.figtext(x=0.20, y=0.19, s='Group 0', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel('frac_1s_hints', size=14)
ax.set_ylabel('max_probl_views', size=14)
#plt.plot((0.145, 0.145), (0.001, 0.399), 'k--', c='blue')
plt.show()
best_kmeans_preds_mask = preds_to_indices(best_kmeans_preds)
log_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
# In this particular splitting, take drop=False to save the initial index
# (simplifying students recovery for step 2)
log_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=False)
log_scaled_all_data_kmeans_0.index
start_time = time.time()
best_kmeans_columns_0, \
best_kmeans_score_0, \
best_kmeans_clusterer_0, \
best_kmeans_preds_0 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0)
# best_kmeans_columns_0 = ['frac_3s_atts', 'max_probl_views']
# best_kmeans_clusterer_0, best_kmeans_score_0, best_kmeans_preds_0 = kmeans(log_scaled_all_data_kmeans_0[best_kmeans_columns_0])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_0)
print("\t>>> Best score:", best_kmeans_score_0)
print("\t>>> Best clusterer:", best_kmeans_clusterer_0)
print("\t>>> Best preds:", best_kmeans_preds_0)
print(sum(best_kmeans_preds_0), len(best_kmeans_preds_0), len(best_kmeans_preds_0[best_kmeans_preds_0 == 0]))
log_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=True)
best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0)
log_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
log_scaled_all_data_kmeans_01 = log_scaled_all_data_kmeans_0.copy()[log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
plt.scatter(log_scaled_all_data_kmeans_00[best_kmeans_columns_0[0]], \
log_scaled_all_data_kmeans_00[best_kmeans_columns_0[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_01[best_kmeans_columns_0[0]], \
log_scaled_all_data_kmeans_01[best_kmeans_columns_0[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_0[0], size=14)
ax.set_ylabel(best_kmeans_columns_0[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
len(best_kmeans_preds_0)
#best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0) # already implemented during group0 visualisation
log_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
log_scaled_all_data_kmeans_00.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_00.index
start_time = time.time()
best_kmeans_columns_00, \
best_kmeans_score_00, \
best_kmeans_clusterer_00, \
best_kmeans_preds_00 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_00)
# best_kmeans_columns_00 = ['frac_3s_atts', 'time_hints']
# best_kmeans_clusterer_00, \
# best_kmeans_score_00, \
# best_kmeans_preds_00 = kmeans(log_scaled_all_data_kmeans_00[best_kmeans_columns_00])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_00)
print("\t>>> Best score:", best_kmeans_score_00)
print("\t>>> Best clusterer:", best_kmeans_clusterer_00)
print("\t>>> Best preds:", best_kmeans_preds_00)
print(sum(best_kmeans_preds_00), len(best_kmeans_preds_00), len(best_kmeans_preds_00[best_kmeans_preds_00 == 0]))
best_kmeans_preds_mask_00 = preds_to_indices(best_kmeans_preds_00)
log_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
log_scaled_all_data_kmeans_001 = log_scaled_all_data_kmeans_00.copy()[log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
plt.scatter(log_scaled_all_data_kmeans_000[best_kmeans_columns_00[0]], \
log_scaled_all_data_kmeans_000[best_kmeans_columns_00[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_001[best_kmeans_columns_00[0]], \
log_scaled_all_data_kmeans_001[best_kmeans_columns_00[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_00[0], size=14)
ax.set_ylabel(best_kmeans_columns_00[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
log_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
log_scaled_all_data_kmeans_000.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_000.index
start_time = time.time()
best_kmeans_columns_000, \
best_kmeans_score_000, \
best_kmeans_clusterer_000, \
best_kmeans_preds_000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_000)
# best_kmeans_columns_000 = ['num_sess', 'num_probs']
# best_kmeans_clusterer_000, \
# best_kmeans_score_000, \
# best_kmeans_preds_000 = kmeans(log_scaled_all_data_kmeans_000[best_kmeans_columns_000])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_000)
print("\t>>> Best score:", best_kmeans_score_000)
print("\t>>> Best clusterer:", best_kmeans_clusterer_000)
print("\t>>> Best preds:", best_kmeans_preds_000)
print(sum(best_kmeans_preds_000), len(best_kmeans_preds_000), len(best_kmeans_preds_000[best_kmeans_preds_000 == 0]))
best_kmeans_preds_mask_000 = preds_to_indices(best_kmeans_preds_000)
log_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
log_scaled_all_data_kmeans_0001 = log_scaled_all_data_kmeans_000.copy()[log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
plt.scatter(log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[0]], \
log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[0]], \
log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[1]], \
alpha=0.6, s=15, c='grey')
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_000[0], size=14)
ax.set_ylabel(best_kmeans_columns_000[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
log_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
log_scaled_all_data_kmeans_0000.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_0000.index
start_time = time.time()
best_kmeans_columns_0000, \
best_kmeans_score_0000, \
best_kmeans_clusterer_0000, \
best_kmeans_preds_0000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0000)
# best_kmeans_columns_0000 = ['num_sess', 'num_probs']
# best_kmeans_clusterer_0000, \
# best_kmeans_score_0000, \
# best_kmeans_preds_0000 = kmeans(log_scaled_all_data_kmeans_0000[best_kmeans_columns_0000])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_0000)
print("\t>>> Best score:", best_kmeans_score_0000)
print("\t>>> Best clusterer:", best_kmeans_clusterer_0000)
print("\t>>> Best preds:", best_kmeans_preds_0000)
print(sum(best_kmeans_preds_0000), \
len(best_kmeans_preds_0000), \
len(best_kmeans_preds_0000[best_kmeans_preds_0000 == 0]))
best_kmeans_preds_mask_0000 = preds_to_indices(best_kmeans_preds_0000)
log_scaled_all_data_kmeans_00000 = log_scaled_all_data_kmeans_0000.copy()[~log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]
log_scaled_all_data_kmeans_00001 = log_scaled_all_data_kmeans_0000.copy()[log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]
plt.scatter(log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[0]], \
log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[0]], \
log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_0000[0], size=14)
ax.set_ylabel(best_kmeans_columns_0000[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
group1_index = np.array(log_scaled_all_data_kmeans_1.index)
len(group1_index)
group2_index = np.array(log_scaled_all_data_kmeans_01['index'])
len(group2_index)
group3_index = np.array(log_scaled_all_data_kmeans_001['index'])
len(group3_index)
group4_index = np.array(log_scaled_all_data_kmeans_0001['index'])
len(group4_index)
group5_index = np.array(log_scaled_all_data_kmeans_00000['index'])
len(group5_index)
group6_index = np.array(log_scaled_all_data_kmeans_00001['index'])
len(group6_index)
def create_joint_cluster_index():
'''
Saves group index files into cluster_index.csv for further analysis
'''
cluster_index_lst = []
for i in range(len(stud_data)+1):
if i in group1_index:
cluster_index_lst.append(1)
elif i in group2_index:
cluster_index_lst.append(2)
elif i in group3_index:
cluster_index_lst.append(3)
elif i in group4_index:
cluster_index_lst.append(4)
elif i in group5_index:
cluster_index_lst.append(5)
elif i in group6_index:
cluster_index_lst.append(6)
print(Counter(cluster_index_lst))
cluster_index = pd.Series(cluster_index_lst, dtype=int)
cluster_index.to_csv('cluster_index_run1.csv')
return
create_joint_cluster_index()
! ls -lh cluster_index_run1.csv
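# Sketch (not from the original analysis): the saved labels could be reloaded later
# for follow-up work. This assumes the file was written without a header row, which
# is the older pandas default for Series.to_csv; adjust the read_csv arguments if needed.
reloaded_cluster_index = pd.read_csv('cluster_index_run1.csv', header=None, index_col=0)[1]
print(Counter(reloaded_cluster_index))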
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run a simulation
Step2: Group mutation trajectories by position and effect size
Step3: The only fixation has an 'esize' $> 0$, which means that it was positively selected.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%pylab inline
import fwdpy as fp
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import copy
nregions = [fp.Region(0,1,1),fp.Region(2,3,1)]
sregions = [fp.ExpS(1,2,1,-0.1),fp.ExpS(1,2,0.01,0.001)]
rregions = [fp.Region(0,3,1)]
rng = fp.GSLrng(101)
popsizes = np.array([1000],dtype=np.uint32)
popsizes=np.tile(popsizes,10000)
#Initialize a vector with 1 population of size N = 1,000
pops=fp.SpopVec(1,1000)
#This sampler object will record selected mutation
#frequencies over time. A sampler gets the length
#of pops as a constructor argument because you
#need a different sampler object in memory for
#each population.
sampler=fp.FreqSampler(len(pops))
#Record mutation frequencies every generation
#The function evolve_regions sampler takes any
#of fwdpy's temporal samplers and applies them.
#For users familiar with C++, custom samplers will be written,
#and we plan to allow for custom samplers to be written primarily
#using Cython, but we are still experimenting with how best to do so.
rawTraj=fp.evolve_regions_sampler(rng,pops,sampler,
popsizes[0:],0.001,0.001,0.001,
nregions,sregions,rregions,
#The one means we sample every generation.
1)
rawTraj = [i for i in sampler]
#This example has only 1 set of trajectories, so let's make a variable for the
#single replicate
traj=rawTraj[0]
print(traj.head())
print(traj.tail())
print(traj.freq.max())
mfreq = traj.groupby(['pos','esize']).max().reset_index()
#Print out info for all mutations that hit a frequency of 1 (e.g., fixed)
mfreq[mfreq['freq']==1]
#Get positions of mutations that hit q = 1
mpos=mfreq[mfreq['freq']==1]['pos']
#Frequency trajectories of fixations
fig = plt.figure()
ax = plt.subplot(111)
plt.xlabel("Time (generations)")
plt.ylabel("Mutation frequency")
ax.set_xlim(traj['generation'].min(),traj['generation'].max())
for i in mpos:
plt.plot(traj[traj['pos']==i]['generation'],traj[traj['pos']==i]['freq'])
#Let's get histogram of effect sizes for all mutations that did not fix
fig = plt.figure()
ax = plt.subplot(111)
plt.xlabel(r'$s$ (selection coefficient)')
plt.ylabel("Number of mutations")
mfreq[mfreq['freq']<1.0]['esize'].hist()
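# A possible follow-up (sketch, not in the original tutorial): approximate the sojourn
# time of each fixation as the span of generations over which its trajectory was
# recorded, using the same grouping keys ('pos', 'esize') as above.
fixed_traj = traj[traj['pos'].isin(mpos)]
sojourn = fixed_traj.groupby(['pos', 'esize'])['generation'].apply(lambda g: g.max() - g.min())
print(sojourn)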
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bulk Hamiltonian with wraparound
Step2: First we need to create the lattice from the tight-binding model and define the translation symmetries.
Step3: Now we define a Builder with these symmetries
Step4: We give the system an "infinite" shape. This needs to be done before adding the hoppings, because on-site energies and hoppings are added only to existing sites.
Step5: Now we can add the hoppings. This modifies the model in-place.
Step6: Finally, use wraparound to finalize the bulk system
Step7: To see that the two models are the same, we plot the bands along some line. Note that the periodicity of the k-vector is $1$ in TBmodels, but $2\pi$ in kwant. The k-vector needs to be scaled accordingly.
Step8: Numerical and visual test for equivalence
Step9: Finite wire with leads
Step10: Now we define a shape for the wire - for simplicity we use a square.
Step11: Again, we explicitly create the lattice sites before populating the hoppings.
Step12: Now we create and attach two leads on either side. The lead must be long enough s.t. the most long-range hopping stays within the lead.
Step13: Here's how you can find out what's the longest-range hopping in a given direction
|
<ASSISTANT_TASK:>
Python Code:
import kwant
import tbmodels
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
model = tbmodels.Model.from_wannier_files(hr_file='data/wannier90_hr.dat')
lattice = model.to_kwant_lattice()
sym = kwant.TranslationalSymmetry(
lattice.vec((1, 0, 0)),
lattice.vec((0, 1, 0)),
lattice.vec((0, 0, 1))
)
kwant_sys = kwant.Builder(sym)
kwant_sys[lattice.shape(lambda p: True, (0, 0, 0))] = 0
model.add_hoppings_kwant(kwant_sys)
kwant_model = kwant.wraparound.wraparound(kwant_sys).finalized()
k_list = [(kx, 0, 0) for kx in np.linspace(0, 1, 100)]
x = range(100)
eigs_tbmodels = [model.eigenval(k) for k in k_list]
eigs_kwant = [la.eigvalsh(
kwant_model.hamiltonian_submatrix(
params={key: val for key, val in zip(['k_x', 'k_y', 'k_z'], 2 * np.pi * np.array(k))}
)
) for k in k_list]
np.isclose(eigs_tbmodels, eigs_kwant).all()
fig, ax = plt.subplots()
for band in np.array(eigs_tbmodels).T:
ax.plot(x, band, 'k')
for band in np.array(eigs_kwant).T:
ax.plot(x, band, 'b')
wire = kwant.Builder()
def shape(p):
x, y, z = p
return -20 < x < 20 and -5 < y < 5 and -5 < z < 5
wire[lattice.shape(shape, (0, 0, 0))] = 0
model.add_hoppings_kwant(wire)
kwant.plot(wire);
sym_lead = kwant.TranslationalSymmetry(lattice.vec((-5, 0, 0)))
lead = kwant.Builder(sym_lead)
def lead_shape(p):
x, y, z = p
return -5 <= x <= 0 and -5 < y < 5 and -5 < z < 5
lead[lattice.shape(lead_shape, (0, 0, 0))] = 0
model.add_hoppings_kwant(lead)
wire.attach_lead(lead);
wire.attach_lead(lead.reversed());
kwant.plot(wire);
for i, dir in enumerate(['x', 'y', 'z']):
print(dir + ':', max([abs(R[i]) for R in model.hop.keys()]))
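# Possible next step (a sketch, not part of the original notebook): finalize the wire
# and compute the two-terminal transmission between the attached leads. The energy
# value 0.0 is an arbitrary placeholder, not a value taken from the model.
fsys = wire.finalized()
smat = kwant.smatrix(fsys, energy=0.0)
print(smat.transmission(1, 0))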
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set-up BERT tokenizer
Step2: Load the data
Step3: Preprocess the data
Step4: Create the Question-Answering Model using BERT and Functional API
Step5: This code should preferably be run on a Google Colab TPU runtime.
Step7: Create evaluation Callback
Step8: Train and Evaluate
|
<ASSISTANT_TASK:>
Python Code:
import os
import re
import json
import string
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizer, TFBertModel, BertConfig
max_len = 384
configuration = BertConfig() # default parameters and configuration for BERT
# Save the slow pretrained tokenizer
slow_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
save_path = "bert_base_uncased/"
if not os.path.exists(save_path):
os.makedirs(save_path)
slow_tokenizer.save_pretrained(save_path)
# Load the fast tokenizer from saved file
tokenizer = BertWordPieceTokenizer("bert_base_uncased/vocab.txt", lowercase=True)
train_data_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json"
train_path = keras.utils.get_file("train.json", train_data_url)
eval_data_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"
eval_path = keras.utils.get_file("eval.json", eval_data_url)
class SquadExample:
def __init__(self, question, context, start_char_idx, answer_text, all_answers):
self.question = question
self.context = context
self.start_char_idx = start_char_idx
self.answer_text = answer_text
self.all_answers = all_answers
self.skip = False
def preprocess(self):
context = self.context
question = self.question
answer_text = self.answer_text
start_char_idx = self.start_char_idx
# Clean context, answer and question
context = " ".join(str(context).split())
question = " ".join(str(question).split())
answer = " ".join(str(answer_text).split())
# Find end character index of answer in context
end_char_idx = start_char_idx + len(answer)
if end_char_idx >= len(context):
self.skip = True
return
# Mark the character indexes in context that are in answer
is_char_in_ans = [0] * len(context)
for idx in range(start_char_idx, end_char_idx):
is_char_in_ans[idx] = 1
# Tokenize context
tokenized_context = tokenizer.encode(context)
# Find tokens that were created from answer characters
ans_token_idx = []
for idx, (start, end) in enumerate(tokenized_context.offsets):
if sum(is_char_in_ans[start:end]) > 0:
ans_token_idx.append(idx)
if len(ans_token_idx) == 0:
self.skip = True
return
# Find start and end token index for tokens from answer
start_token_idx = ans_token_idx[0]
end_token_idx = ans_token_idx[-1]
# Tokenize question
tokenized_question = tokenizer.encode(question)
# Create inputs
input_ids = tokenized_context.ids + tokenized_question.ids[1:]
token_type_ids = [0] * len(tokenized_context.ids) + [1] * len(
tokenized_question.ids[1:]
)
attention_mask = [1] * len(input_ids)
# Pad and create attention masks.
# Skip if truncation is needed
padding_length = max_len - len(input_ids)
if padding_length > 0: # pad
input_ids = input_ids + ([0] * padding_length)
attention_mask = attention_mask + ([0] * padding_length)
token_type_ids = token_type_ids + ([0] * padding_length)
elif padding_length < 0: # skip
self.skip = True
return
self.input_ids = input_ids
self.token_type_ids = token_type_ids
self.attention_mask = attention_mask
self.start_token_idx = start_token_idx
self.end_token_idx = end_token_idx
self.context_token_to_char = tokenized_context.offsets
with open(train_path) as f:
raw_train_data = json.load(f)
with open(eval_path) as f:
raw_eval_data = json.load(f)
def create_squad_examples(raw_data):
squad_examples = []
for item in raw_data["data"]:
for para in item["paragraphs"]:
context = para["context"]
for qa in para["qas"]:
question = qa["question"]
answer_text = qa["answers"][0]["text"]
all_answers = [_["text"] for _ in qa["answers"]]
start_char_idx = qa["answers"][0]["answer_start"]
squad_eg = SquadExample(
question, context, start_char_idx, answer_text, all_answers
)
squad_eg.preprocess()
squad_examples.append(squad_eg)
return squad_examples
def create_inputs_targets(squad_examples):
dataset_dict = {
"input_ids": [],
"token_type_ids": [],
"attention_mask": [],
"start_token_idx": [],
"end_token_idx": [],
}
for item in squad_examples:
if item.skip == False:
for key in dataset_dict:
dataset_dict[key].append(getattr(item, key))
for key in dataset_dict:
dataset_dict[key] = np.array(dataset_dict[key])
x = [
dataset_dict["input_ids"],
dataset_dict["token_type_ids"],
dataset_dict["attention_mask"],
]
y = [dataset_dict["start_token_idx"], dataset_dict["end_token_idx"]]
return x, y
train_squad_examples = create_squad_examples(raw_train_data)
x_train, y_train = create_inputs_targets(train_squad_examples)
print(f"{len(train_squad_examples)} training points created.")
eval_squad_examples = create_squad_examples(raw_eval_data)
x_eval, y_eval = create_inputs_targets(eval_squad_examples)
print(f"{len(eval_squad_examples)} evaluation points created.")
def create_model():
## BERT encoder
encoder = TFBertModel.from_pretrained("bert-base-uncased")
## QA Model
input_ids = layers.Input(shape=(max_len,), dtype=tf.int32)
token_type_ids = layers.Input(shape=(max_len,), dtype=tf.int32)
attention_mask = layers.Input(shape=(max_len,), dtype=tf.int32)
embedding = encoder(
input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask
)[0]
start_logits = layers.Dense(1, name="start_logit", use_bias=False)(embedding)
start_logits = layers.Flatten()(start_logits)
end_logits = layers.Dense(1, name="end_logit", use_bias=False)(embedding)
end_logits = layers.Flatten()(end_logits)
start_probs = layers.Activation(keras.activations.softmax)(start_logits)
end_probs = layers.Activation(keras.activations.softmax)(end_logits)
model = keras.Model(
inputs=[input_ids, token_type_ids, attention_mask],
outputs=[start_probs, end_probs],
)
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=False)
optimizer = keras.optimizers.Adam(lr=5e-5)
model.compile(optimizer=optimizer, loss=[loss, loss])
return model
use_tpu = True
if use_tpu:
# Create distribution strategy
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
strategy = tf.distribute.TPUStrategy(tpu)
# Create model
with strategy.scope():
model = create_model()
else:
model = create_model()
model.summary()
def normalize_text(text):
text = text.lower()
# Remove punctuations
exclude = set(string.punctuation)
text = "".join(ch for ch in text if ch not in exclude)
# Remove articles
regex = re.compile(r"\b(a|an|the)\b", re.UNICODE)
text = re.sub(regex, " ", text)
# Remove extra white space
text = " ".join(text.split())
return text
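# Quick illustration of the normalisation above (made-up input string):
print(normalize_text("The Quick, Brown Fox jumped!"))  # -> "quick brown fox jumped"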
class ExactMatch(keras.callbacks.Callback):
Each `SquadExample` object contains the character level offsets for each token
in its input paragraph. We use them to get back the span of text corresponding
to the tokens between our predicted start and end tokens.
All the ground-truth answers are also present in each `SquadExample` object.
We calculate the percentage of data points where the span of text obtained
from model predictions matches one of the ground-truth answers.
def __init__(self, x_eval, y_eval):
self.x_eval = x_eval
self.y_eval = y_eval
def on_epoch_end(self, epoch, logs=None):
pred_start, pred_end = self.model.predict(self.x_eval)
count = 0
eval_examples_no_skip = [_ for _ in eval_squad_examples if _.skip == False]
for idx, (start, end) in enumerate(zip(pred_start, pred_end)):
squad_eg = eval_examples_no_skip[idx]
offsets = squad_eg.context_token_to_char
start = np.argmax(start)
end = np.argmax(end)
if start >= len(offsets):
continue
pred_char_start = offsets[start][0]
if end < len(offsets):
pred_char_end = offsets[end][1]
pred_ans = squad_eg.context[pred_char_start:pred_char_end]
else:
pred_ans = squad_eg.context[pred_char_start:]
normalized_pred_ans = normalize_text(pred_ans)
normalized_true_ans = [normalize_text(_) for _ in squad_eg.all_answers]
if normalized_pred_ans in normalized_true_ans:
count += 1
acc = count / len(self.y_eval[0])
print(f"\nepoch={epoch+1}, exact match score={acc:.2f}")
exact_match_callback = ExactMatch(x_eval, y_eval)
model.fit(
x_train,
y_train,
epochs=1, # For demonstration, 3 epochs are recommended
verbose=2,
batch_size=64,
callbacks=[exact_match_callback],
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading a csv file into a Pandas DataFrame
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib
%matplotlib inline
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix
from sklearn.cross_validation import train_test_split
import pickle
import time
time1=time.strftime('%Y-%m-%d_%H-%M-%S')
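# The cells above only set up imports; a minimal sketch of the loading step described
# in the title. The file name 'dados.csv' is a hypothetical placeholder, not a path
# from the original notebook.
df = pd.read_csv('dados.csv')
df.head()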
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step3: PrettyTensor version
Step4: Load Data
Step5: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step6: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
Step7: Data Dimensions
Step8: Helper-function for plotting images
Step9: Plot a few images to see if data is correct
Step10: TensorFlow Graph
Step11: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step12: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step13: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step14: Neural Network
Step15: Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Step16: Getting the Weights
Step17: Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must run them within a TensorFlow session; a short example is included after the session is initialized in the code below.
Step18: Optimization Method
Step19: Performance Measures
Step20: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
Step21: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
Step22: Saver
Step23: The saved files are often called checkpoints because they may be written at regular intervals during optimization.
Step24: Create the directory if it does not exist.
Step25: This is the path for the checkpoint-file.
Step26: TensorFlow Run
Step27: Initialize variables
Step28: Execute the function now to initialize the variables.
Step29: Helper-function to perform optimization iterations
Step30: The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
Step31: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
Step32: Helper-function to plot example errors
Step33: Helper-function to plot confusion matrix
Step34: Helper-functions for calculating classifications
Step35: Calculate the predicted class for the test-set.
Step36: Calculate the predicted class for the validation-set.
Step37: Helper-functions for the classification accuracy
Step38: Calculate the classification accuracy on the validation-set.
Step39: Helper-function for showing the performance
Step40: Helper-function for plotting convolutional weights
Step41: Performance before any optimization
Step42: The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation is shown so we can see whether there is a difference.
Step43: Perform 10,000 optimization iterations
Step44: The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.
Step45: Initialize Variables Again
Step46: This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.
Step47: The convolutional weights should now be different from the weights shown above.
Step48: Restore Best Variables
Step49: The classification accuracy is high again when using the variables that were previously saved.
Step50: The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.
Step51: Close TensorFlow Session
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image('images/02_network_flowchart.png')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
tf.__version__
pt.__version__
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
x_pretty = pt.wrap(x_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
y_pred_cls = tf.argmax(y_pred, dimension=1)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
save_dir = 'checkpoints/'
if not os.path.exists(save_dir):
os.makedirs(save_dir)
save_path = os.path.join(save_dir, 'best_validation')
session = tf.Session()
def init_variables():
session.run(tf.global_variables_initializer())
init_variables()
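# As noted in the description above, the weight variables are TensorFlow objects;
# their contents only become available by running them in the session, e.g.:
w1 = session.run(weights_conv1)
print(w1.shape)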
train_batch_size = 64
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
print_test_accuracy()
plot_conv_weights(weights=weights_conv1)
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
plot_conv_weights(weights=weights_conv1)
init_variables()
print_test_accuracy()
plot_conv_weights(weights=weights_conv1)
saver.restore(sess=session, save_path=save_path)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
plot_conv_weights(weights=weights_conv1)
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set hyperparameters
Step2: Load data
Step3: Restoring words from index
Step4: Preprocess data
Step5: Design neural network architecture
Step6: Configure model
Step7: Train!
Step8: Evaluate
|
<ASSISTANT_TASK:>
Python Code:
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.layers import Embedding # new!
from keras.callbacks import ModelCheckpoint # new!
import os # new!
from sklearn.metrics import roc_auc_score, roc_curve # new!
import pandas as pd
import matplotlib.pyplot as plt # new!
%matplotlib inline
# output directory name:
output_dir = 'model_output/dense'
# training:
epochs = 4
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 5000 # as per Maas et al. (2011); may not be optimal
n_words_to_skip = 50 # ditto
max_review_length = 100
pad_type = trunc_type = 'pre'
# neural network architecture:
n_dense = 64
dropout = 0.5
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words, skip_top=n_words_to_skip)
x_train[0:6] # 0 reserved for padding; 1 would be starting character; 2 is unknown; 3 is most common word, etc.
for x in x_train[0:6]:
print(len(x))
y_train[0:6]
len(x_train), len(x_valid)
word_index = keras.datasets.imdb.get_word_index()
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["PAD"] = 0
word_index["START"] = 1
word_index["UNK"] = 2
word_index
index_word = {v:k for k,v in word_index.items()}
x_train[0]
' '.join(index_word[id] for id in x_train[0])
(all_x_train,_),(all_x_valid,_) = imdb.load_data()
' '.join(index_word[id] for id in all_x_train[0])
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_train[0:6]
for x in x_train[0:6]:
print(len(x))
' '.join(index_word[id] for id in x_train[0])
' '.join(index_word[id] for id in x_train[5])
# CODE HERE
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(Flatten())
model.add(Dense(n_dense, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary() # so many parameters!
# embedding layer dimensions and parameters:
n_dim, n_unique_words, n_dim*n_unique_words
# ...flatten:
max_review_length, n_dim, n_dim*max_review_length
# ...dense:
n_dense, n_dim*max_review_length*n_dense + n_dense # weights + biases
# ...and output:
n_dense + 1
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
# 84.7% validation accuracy in epoch 2
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
model.load_weights(output_dir+"/weights.01.hdf5") # zero-indexed
y_hat = model.predict_proba(x_valid)
len(y_hat)
y_hat[0]
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
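# The roc_curve import above is otherwise unused; a small sketch of how it could be
# used to inspect the trade-off between true- and false-positive rates:
fpr, tpr, _ = roc_curve(y_valid, y_hat.ravel())
plt.plot(fpr, tpr)
plt.xlabel('False positive rate')
_ = plt.ylabel('True positive rate')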
pct_auc = roc_auc_score(y_valid, y_hat)*100.0
"{:0.2f}".format(pct_auc)
float_y_hat = []
for y in y_hat:
float_y_hat.append(y[0])
ydf = pd.DataFrame(list(zip(float_y_hat, y_valid)), columns=['y_hat', 'y'])
ydf.head(10)
' '.join(index_word[id] for id in all_x_valid[0])
' '.join(index_word[id] for id in all_x_valid[6])
ydf[(ydf.y == 0) & (ydf.y_hat > 0.9)].head(10)
' '.join(index_word[id] for id in all_x_valid[489])
ydf[(ydf.y == 1) & (ydf.y_hat < 0.1)].head(10)
' '.join(index_word[id] for id in all_x_valid[927])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we choose a model and hyperparameters
Step2: Then we train the model, and use it to predict labels for data we already know
Step3: Finally, we compute the fraction of correctly labeled points
Step4: We see an accuracy score of 1.0, which indicates that 100% of points were correctly labeled by our model!
Step5: The nearest-neighbor classifier is about 90% accurate on this hold-out set, which is more in line with our expectation.
Step6: We could compute the mean of the two accuracy scores to get a better measure of the global model performance.
Step7: This gives us an even better idea of the performance of the algorithm.
Step8: Because we have 150 samples, the leave one out cross-validation yields scores for 150 trials, and the score indicates either successful (1.0) or unsuccessful (0.0) prediction.
Step9: This gives us a good impression of the performance of our model. But there is also a problem. Can you spot it?
Step10: Now let's create some data to which we will fit our model
Step11: We can now visualize our data, along with polynomial fits of several degrees
Step12: We can control the model complexity (the degree of the polynomial), which can be any non-negative integer.
Step13: This shows precisely the behavior we expect
Step14: Learning Curves
Step15: We will duplicate the preceding code to plot the validation curve for this larger dataset; for reference let's over-plot the previous results as well
Step16: From the validation curve it is clear that the larger dataset can support a much more complicated model
Step17: This plot gives us a visual depiction of how our model responds to increasing training data.
Step18: Notice that like a normal estimator, this has not yet been applied to any data.
Step19: Now that this is fit, we can ask for the best parameters as follows
Step20: Finally, if we wish, we can use the best model and show the fit to our data using code from before
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)
y_model = model.predict(X)
from sklearn.metrics import accuracy_score
accuracy_score(y, y_model)
from sklearn.model_selection import train_test_split
# split the data with 50% in each set
X1, X2, y1, y2 = train_test_split(X, y, random_state=0, train_size=0.5)
# fit the model on one set of data
model.fit(X1, y1)
# evaluate the model on the second set of data
y2_model = model.predict(X2)
accuracy_score(y2, y2_model)
y2_model = model.fit(X1, y1).predict(X2)
y1_model = model.fit(X2, y2).predict(X1)
accuracy_score(y1, y1_model), accuracy_score(y2, y2_model)
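# As mentioned in the description, the two fold accuracies can be averaged into a
# single summary figure (plain-Python sketch):
(accuracy_score(y1, y1_model) + accuracy_score(y2, y2_model)) / 2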
from sklearn.model_selection import cross_val_score
cross_val_score(model, X, y, cv=5)
from sklearn.model_selection import LeaveOneOut
import numpy as np
loo = LeaveOneOut()
loo.get_n_splits(X)
scores = []
for train_index, test_index in loo.split(X):
model.fit(X[train_index], y[train_index])
scores.append(accuracy_score(y[test_index], model.predict(X[test_index])))
scores = np.array(scores)
scores
scores.mean()
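# The manual loop above can also be delegated to cross_val_score by passing the
# splitter object directly; this sketch should reproduce the same mean accuracy:
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())
loo_scores.mean()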
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree), LinearRegression(**kwargs))
import numpy as np
def make_data(N, err=1.0, rseed=1):
# randomly sample the data
rng = np.random.RandomState(rseed)
X = rng.rand(N, 1) ** 2
y = 10 - 1. / (X.ravel() + 0.1)
if err > 0:
y += err * rng.randn(N)
return X, y
X, y = make_data(40)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # for beautiful plotting
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
plt.scatter(X.ravel(), y, color='black') # plot data
axis = plt.axis()
# plot ploynomials
for degree in [1, 3, 5]:
y_test = PolynomialRegression(degree).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test, label='degree={0}'.format(degree))
plt.xlim(-0.1, 1.0)
plt.ylim(-2, 12)
plt.legend(loc='best');
from sklearn.model_selection import validation_curve
degree = np.arange(0, 21)
train_score, val_score = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree', degree, cv=10)
plt.plot(degree, np.median(train_score, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score');
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = PolynomialRegression(3).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test);
plt.axis(lim);
X2, y2 = make_data(200)
plt.scatter(X2.ravel(), y2);
degree = np.arange(51)
train_score2, val_score2 = validation_curve(PolynomialRegression(), X2, y2,
'polynomialfeatures__degree', degree, cv=10)
plt.plot(degree, np.median(train_score2, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score2, 1), color='red', label='validation score')
plt.plot(degree[:train_score.shape[0]], np.median(train_score, 1), color='blue', alpha=0.3, linestyle='dashed')
plt.plot(degree[:train_score.shape[0]], np.median(val_score, 1), color='red', alpha=0.3, linestyle='dashed')
plt.legend(loc='best')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score');
from sklearn.model_selection import learning_curve
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for i, degree in enumerate([2, 9]):
N, train_lc, val_lc = learning_curve(PolynomialRegression(degree),
X, y, cv=10,
train_sizes=np.linspace(0.3, 1, 25))
ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
color='gray', linestyle='dashed')
ax[i].set_ylim(0, 1)
ax[i].set_xlim(N[0], N[-1])
ax[i].set_xlabel('training size')
ax[i].set_ylabel('score')
ax[i].set_title('degree = {0}'.format(degree), size=14)
ax[i].legend(loc='best')
from sklearn.model_selection import GridSearchCV
param_grid = {'polynomialfeatures__degree': np.arange(21),
'linearregression__fit_intercept': [True, False],
'linearregression__normalize': [True, False]}
grid = GridSearchCV(PolynomialRegression(), param_grid, cv=10)
grid.fit(X, y);
grid.best_params_
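# The cross-validated score achieved by that best parameter combination is also
# available on the fitted grid-search object:
grid.best_score_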
model = grid.best_estimator_
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = model.fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test);
plt.axis(lim);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
counter_words = Counter(text)
current_id = 0
vocab_to_int = {}
int_to_vocab = {}
for word in counter_words:
vocab_to_int[word] = current_id
int_to_vocab[current_id] = word
current_id += 1
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
tokens = {}
tokens['.'] = '||Period||'
tokens[','] = '||Comma||'
tokens['"'] = '||Quotation_Mark||'
tokens[';'] = '||Semicolon||'
tokens['!'] = '||Exclamation_mark||'
tokens['?'] = '||Question_mark||'
tokens['('] = '||Left_Parentheses||'
tokens[')'] = '||Right_Parentheses||'
tokens['--'] = '||Dash||'
tokens['\n'] = '||Return||'
return tokens
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape = (None, None), name = "input")
targets = tf.placeholder(tf.int32, shape = (None, None), name = "targets")
learning_rate = tf.placeholder(tf.float32, name = "learning_rate")
return (inputs, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm] * 3)
stacked_lstm_state = tf.identity(stacked_lstm.zero_state(batch_size, tf.float32), "initial_state")
# TODO: Implement Function
return (stacked_lstm, stacked_lstm_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
state = tf.identity(state, "final_state")
# TODO: Implement Function
return outputs, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
inputs = get_embed(input_data, vocab_size, 100)
#fully_connected = tf.contrib.layers.fully_connected(inputs, rnn_size)
outputs, state = build_rnn(cell, inputs)
w_initializer = tf.truncated_normal_initializer(stddev=0.05)
z_initializer = tf.zeros_initializer()
logits = tf.contrib.layers.fully_connected(
outputs, vocab_size, weights_initializer=w_initializer,
biases_initializer=z_initializer, activation_fn=None,
)
# TODO: Implement Function
return logits, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = len(int_text)//(batch_size*seq_length)
return_stuff = np.ndarray(shape=(n_batches, 2, batch_size, seq_length), dtype=int)
skipdistance = n_batches * seq_length
for i in range(n_batches):
batch_inputid = i * seq_length
for i2 in range(batch_size):
inputid = batch_inputid + (i2 * skipdistance)
targetid = inputid + 1
return_stuff[i][0][i2] = int_text[inputid:inputid + seq_length]
return_stuff[i][1][i2] = int_text[targetid:targetid + seq_length]
return return_stuff
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
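# Illustrative trace of get_batches above (toy numbers, not from the TV-script data):
# with int_text = list(range(1, 15)), batch_size=2 and seq_length=3 the function
# returns n_batches = 2, and the first batch is
#   inputs : [[1, 2, 3], [7, 8, 9]]
#   targets: [[2, 3, 4], [8, 9, 10]]
# i.e. each target sequence is the matching input sequence shifted by one word.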
# Number of Epochs
num_epochs = 750
# Sequence Length
seq_length = 128
# Batch Size
batch_size = len(int_text) // (2 * seq_length)
# RNN Size
rnn_size = 256
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 40
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return (loaded_graph.get_tensor_by_name("input:0"),
loaded_graph.get_tensor_by_name("initial_state:0"),
loaded_graph.get_tensor_by_name("final_state:0"),
loaded_graph.get_tensor_by_name("probs:0"))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
import random
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
enumerated_probabilities = list(enumerate(probabilities))
enumerated_probabilities.sort(key=lambda x: x[1])
enumerated_probabilities.reverse()
random_index = random.random()
far = random_index
#print(enumerated_probabilities[random_index])
for prob in enumerated_probabilities:
if far >= prob[1]:
return int_to_vocab[prob[0]]
far += prob[1]
#return int_to_vocab[enumerated_probabilities[random_index][0]]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 500
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
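# Optional (a sketch, not part of the original cell): persist the generated
# script to a text file so it can be reviewed outside the notebook.
# with open('generated_script.txt', 'w') as f:
#     f.write(tv_script)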
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: Define start and end points
Step3: Get the cross section, and convert lat/lon to supplementary coordinates
Step4: For this example, we will be plotting potential temperature, relative humidity, and tangential/normal winds.
Step5: Now, we can make the plot.
|
<ASSISTANT_TASK:>
Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.interpolate import cross_section
data = xr.open_dataset(get_test_data('narr_example.nc', False))
data = data.metpy.parse_cf().squeeze()
print(data)
start = (37.0, -105.0)
end = (35.5, -65.0)
cross = cross_section(data, start, end).set_coords(('lat', 'lon'))
print(cross)
cross['Potential_temperature'] = mpcalc.potential_temperature(
cross['isobaric'],
cross['Temperature']
)
cross['Relative_humidity'] = mpcalc.relative_humidity_from_specific_humidity(
cross['isobaric'],
cross['Temperature'],
cross['Specific_humidity']
)
cross['u_wind'] = cross['u_wind'].metpy.convert_units('knots')
cross['v_wind'] = cross['v_wind'].metpy.convert_units('knots')
cross['t_wind'], cross['n_wind'] = mpcalc.cross_section_components(
cross['u_wind'],
cross['v_wind']
)
print(cross)
# Define the figure object and primary axes
fig = plt.figure(1, figsize=(16., 9.))
ax = plt.axes()
# Plot RH using contourf
rh_contour = ax.contourf(cross['lon'], cross['isobaric'], cross['Relative_humidity'],
levels=np.arange(0, 1.05, .05), cmap='YlGnBu')
rh_colorbar = fig.colorbar(rh_contour)
# Plot potential temperature using contour, with some custom labeling
theta_contour = ax.contour(cross['lon'], cross['isobaric'], cross['Potential_temperature'],
levels=np.arange(250, 450, 5), colors='k', linewidths=2)
theta_contour.clabel(theta_contour.levels[1::2], fontsize=8, colors='k', inline=1,
inline_spacing=8, fmt='%i', rightside_up=True, use_clabeltext=True)
# Plot winds using the axes interface directly, with some custom indexing to make the barbs
# less crowded
wind_slc_vert = list(range(0, 19, 2)) + list(range(19, 29))
wind_slc_horz = slice(5, 100, 5)
ax.barbs(cross['lon'][wind_slc_horz], cross['isobaric'][wind_slc_vert],
cross['t_wind'][wind_slc_vert, wind_slc_horz],
cross['n_wind'][wind_slc_vert, wind_slc_horz], color='k')
# Adjust the y-axis to be logarithmic
ax.set_yscale('symlog')
ax.set_yticklabels(np.arange(1000, 50, -100))
ax.set_ylim(cross['isobaric'].max(), cross['isobaric'].min())
ax.set_yticks(np.arange(1000, 50, -100))
# Define the CRS and inset axes
data_crs = data['Geopotential_height'].metpy.cartopy_crs
ax_inset = fig.add_axes([0.125, 0.665, 0.25, 0.25], projection=data_crs)
# Plot geopotential height at 500 hPa using xarray's contour wrapper
ax_inset.contour(data['x'], data['y'], data['Geopotential_height'].sel(isobaric=500.),
levels=np.arange(5100, 6000, 60), cmap='inferno')
# Plot the path of the cross section
endpoints = data_crs.transform_points(ccrs.Geodetic(),
*np.vstack([start, end]).transpose()[::-1])
ax_inset.scatter(endpoints[:, 0], endpoints[:, 1], c='k', zorder=2)
ax_inset.plot(cross['x'], cross['y'], c='k', zorder=2)
# Add geographic features
ax_inset.coastlines()
ax_inset.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='k', alpha=0.2, zorder=0)
# Set the titles and axes labels
ax_inset.set_title('')
ax.set_title('NARR Cross-Section \u2013 {} to {} \u2013 Valid: {}\n'
'Potential Temperature (K), Tangential/Normal Winds (knots), '
'Relative Humidity (dimensionless)\n'
'Inset: Cross-Section Path and 500 hPa Geopotential Height'.format(
start, end, cross['time'].dt.strftime('%Y-%m-%d %H:%MZ').item()))
ax.set_ylabel('Pressure (hPa)')
ax.set_xlabel('Longitude (degrees east)')
rh_colorbar.set_label('Relative Humidity (dimensionless)')
plt.show()
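# Optional (a sketch, not in the original example): save the finished figure to
# an image file in addition to displaying it.
# fig.savefig('narr_cross_section.png', dpi=150, bbox_inches='tight')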
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-am4', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
DOC.set_value("AGCM")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
DOC.set_value("hydrostatic")
DOC.set_value("primitive equations")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Advection_tracer = 30 min, physics = 30 min")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("3 hours")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
DOC.set_value("present day")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
DOC.set_value("fixed grid")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
DOC.set_value("finite volumes")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
DOC.set_value("explicit")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
DOC.set_value("Other: vapour/solid/liquid")
DOC.set_value("clouds")
DOC.set_value("potential temperature")
DOC.set_value("wind components")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
DOC.set_value("sponge layer")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Zero flux")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Monotonic constraint and divergence damping")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
DOC.set_value("finite volume")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
DOC.set_value("Other: mass")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
DOC.set_value("Vorticity")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
DOC.set_value("BC (black carbon / soot)")
DOC.set_value("dust")
DOC.set_value("organic")
DOC.set_value("sea salt")
DOC.set_value("sulphate")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
DOC.set_value("wide-band model")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(18)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
DOC.set_value("layer interaction")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(10)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
DOC.set_value("vertical profile of Kz")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Donner(1993) deep cumulus")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
DOC.set_value("mass-flux")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
DOC.set_value("CAPE")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
DOC.set_value("convective momentum transport")
DOC.set_value("detrainment")
DOC.set_value("entrainment")
DOC.set_value("penetrative convection")
DOC.set_value("radiative effect of anvils")
DOC.set_value("updrafts")
DOC.set_value("vertical momentum transport")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Bretherton et al. (2004)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
DOC.set_value("mass-flux")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
DOC.set_value("separate diagnosis")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Tiedtke (1993)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
DOC.set_value("liquid rain")
DOC.set_value("snow")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Rotstayn (1997) and Ming et al. (2006)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
DOC.set_value("cloud droplets")
DOC.set_value("cloud ice")
DOC.set_value("mixed phase")
DOC.set_value("water vapour deposition")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
DOC.set_value("Other: prognostic cloud area, liquid, and ice for stratiform; convective areas from cumulus parameterizations")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Tiedtke (1993) prognostic for stratiform; Donner et al. (2001) , Bretherton et al. (2004), and Wilcox and Donner (2007) for convective")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
DOC.set_value("coupled with deep")
DOC.set_value("coupled with shallow")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
DOC.set_value("effect on drag")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
DOC.set_value("statistical sub-grid scale variance")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
DOC.set_value("non-linear calculation")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
DOC.set_value("linear theory")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
DOC.set_value("single wave")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
DOC.set_value("background spectrum")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
DOC.set_value("spatially dependent")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
DOC.set_value("linear theory")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
DOC.set_value("single wave")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
DOC.set_value("transient")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("From Kopp et al. (2005, Solar Physics)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
DOC.set_value("fixed")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(23)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
DOC.set_value("Other: via stratospheric aerosols optical thickness")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convert a grid from one format to another
Step2: Working with Grid instances
Step3: Displaying a grid
Step4: A nicer image might include a neat-line outline, colour legend, scale bar and title. The gxgrid.figure_map() function will create a figure-style map, which can be saved to an image file using the image_file() method of the map instance.
Step5: Grid Coordinate System
Step6: Coordinate systems also contain the full coordinate system parameter information, from which you can construct coordinate systems in other applications.
Step7: Display with coordinate systems
Step8: Basic Grid Statistics
Step9: Statistics using Geosoft VVs
Step10: Grid Iterator
|
<ASSISTANT_TASK:>
Python Code:
import geosoft.gxpy.gx as gx
import geosoft.gxpy.grid as gxgrid
import geosoft.gxpy.utility as gxu
from IPython.display import Image
gxc = gx.GXpy()
url = 'https://github.com/GeosoftInc/gxpy/raw/9.3/examples/tutorial/Grids%20and%20Images/'
gxu.url_retrieve(url + 'elevation_surfer.GRD')
# open surfer grid
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid_surfer:
# copy the grid to an ER Mapper format grid file
with gxgrid.Grid.copy(grid_surfer, 'elevation.ers(ERM)', overwrite=True) as grid_erm:
print('file:', grid_erm.file_name,
'\ndecorated:', grid_erm.file_name_decorated)
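# Note (added comment): the bracketed suffixes such as '(SRF;VER=V7)' and '(ERM)'
# are Geosoft grid "decorations" that tell the API which file format/driver to
# use; the file name on disk does not include them, which is what the
# file_name vs file_name_decorated printout above illustrates.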
# open surfer grid, then set to None to free resources
grid_surfer = gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)')
print(grid_surfer.name)
grid_surfer = None
# open surfer grid using with
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid_surfer:
print(grid_surfer.name)
image_file = gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)').image_file(shade=True, pix_width=500)
Image(image_file)
image_file = gxgrid.figure_map('elevation_surfer.grd(SRF;VER=V7)', title='Elevation').image_file(pix_width=800)
Image(image_file)
# define the coordinate system of the Surfer grid
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid_surfer:
grid_surfer.coordinate_system = 'GDA94 / UTM zone 54S'
# copy the grid to an ER Mapper format grid file and the coordinate system is transferred
with gxgrid.Grid.copy(grid_surfer, 'elevation.ers(ERM)', overwrite=True) as grid_erm:
print(str(grid_erm.coordinate_system))
with gxgrid.Grid.open('elevation.ers(ERM)') as grid_erm:
print('Grid Exchange Format coordinate system:\n', grid_erm.coordinate_system.gxf)
with gxgrid.Grid.open('elevation.ers(ERM)') as grid_erm:
print('ESRI WKT format:\n', grid_erm.coordinate_system.esri_wkt)
with gxgrid.Grid.open('elevation.ers(ERM)') as grid_erm:
print('JSON format:\n', grid_erm.coordinate_system.json)
# show the grid as an image
Image(gxgrid.figure_map('elevation.ers(ERM)', features=('NEATLINE', 'SCALE', 'LEGEND', 'ANNOT_LL')).image_file(pix_width=800))
import numpy as np
# open the grid, using the with construct ensures resources are released
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid:
# get the data in a numpy array
data_values = grid.xyzv()[:, :, 3]
# print statistical properties
print('minimum: ', np.nanmin(data_values))
print('maximum: ', np.nanmax(data_values))
print('mean: ', np.nanmean(data_values))
print('standard deviation: ', np.nanstd(data_values))
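# Added aside (assumption: xyzv() returns dummy cells as NaN, which the nan-aware
# statistics above already rely on), so dummies can also be counted directly with numpy:
print('number of dummies: ', np.count_nonzero(np.isnan(data_values)))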
import geosoft.gxapi as gxapi
# the GXST class requires a desktop license
if gxc.entitled:
# create a gxapi.GXST instance to accumulate statistics
stats = gxapi.GXST.create()
# open the grid
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid:
# add data from each row to the stats instance
for row in range(grid.ny):
stats.data_vv(grid.read_row(row).gxvv)
# print statistical properties
print('minimum: ', stats.get_info(gxapi.ST_MIN))
print('maximum: ', stats.get_info(gxapi.ST_MAX))
print('mean: ', stats.get_info(gxapi.ST_MEAN))
print('standard deviation: ', stats.get_info(gxapi.ST_STDDEV))
# the GXST class requires a desktop license
if gxc.entitled:
# create a gxapi.GXST instance to accumulate statistics
stats = gxapi.GXST.create()
    # add each data value to stats point-by-point (slow; better to use a numpy or vector approach)
number_of_dummies = 0
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid:
for x, y, z, v in grid:
if v is None:
number_of_dummies += 1
else:
stats.data(v)
total_points = grid.nx * grid.ny
# print statistical properties
print('minimum: ', stats.get_info(gxapi.ST_MIN))
print('maximum: ', stats.get_info(gxapi.ST_MAX))
print('mean: ', stats.get_info(gxapi.ST_MEAN))
print('standard deviation: ', stats.get_info(gxapi.ST_STDDEV))
print('number of dummies: ', number_of_dummies)
print('number of valid data points: ', total_points - number_of_dummies)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modeling with MKS
Step2: The function make_cahn_hilliard generates n_samples random microstructures, X, and the associated updated microstructures, y, after one time step. The following cell plots one of these microstructures along with its update.
Step3: Calibrate Influence Coefficients
Step4: We are now going to calibrate the influence coefficients while varying the number of local states from 2 up to 10. Each of these models will then predict the evolution of the concentration fields. Mean square error will be used to compare the results with the testing dataset to evaluate how the MKS model's performance changes as we change the number of local states.
Step5: Next we will calibrate the influence coefficients while varying the number of local states and compute the mean squared error. The following demonstrates how to use scikit-learn's GridSearchCV to optimize n_states as a hyperparameter. Of course, the best fit is always with a larger value of n_states. Increasing this parameter does not overfit the data.
Step6: As expected, the accuracy of the MKS model monotonically increases as we increase n_states, but accuracy doesn't improve significantly once n_states gets larger than single digits.
Step7: Here are the first 4 influence coefficients.
Step8: Predict Microstructure Evolution
Step9: In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS model.
Step10: Let's take a look at the concentration fields.
Step11: The MKS model was able to capture the microstructure evolution with 6 local states.
Step12: Once again we are going to march forward in time by feeding the concentration fields back into the Cahn-Hilliard simulation and the MKS model.
Step13: Let's take a look at the results.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import pymks
from pymks.datasets import make_cahn_hilliard
n = 41
n_samples = 400
dt = 1e-2
np.random.seed(99)
X, y = make_cahn_hilliard(n_samples=n_samples, size=(n, n), dt=dt)
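# Added sanity check: make_cahn_hilliard returns plain NumPy arrays, so both X and y
# should have shape (n_samples, n, n) = (400, 41, 41).
print(X.shape, y.shape)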
from pymks.tools import draw_concentrations
draw_concentrations((X[0], y[0]), labels=('Input Concentration', 'Output Concentration'))
import sklearn
from sklearn.cross_validation import train_test_split
split_shape = (X.shape[0],) + (np.product(X.shape[1:]),)
X_train, X_test, y_train, y_test = train_test_split(X.reshape(split_shape), y.reshape(split_shape),
test_size=0.5, random_state=3)
from pymks import MKSLocalizationModel
from pymks.bases import PrimitiveBasis
from sklearn.grid_search import GridSearchCV
parameters_to_tune = {'n_states': np.arange(2, 11)}
prim_basis = PrimitiveBasis(2, [-1, 1])
model = MKSLocalizationModel(prim_basis)
gs = GridSearchCV(model, parameters_to_tune, cv=5, fit_params={'size': (n, n)})
gs.fit(X_train, y_train)
print(gs.best_estimator_)
print(gs.score(X_test, y_test))
from pymks.tools import draw_gridscores
draw_gridscores(gs.grid_scores_, 'n_states',
score_label='R-squared', param_label='L-Number of Local States')
model = MKSLocalizationModel(basis=PrimitiveBasis(6, [-1, 1]))
model.fit(X, y)
from pymks.tools import draw_coeff
draw_coeff(model.coeff[...,:4])
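# Added check (assumption about the layout of model.coeff): the local-state axis is
# last -- which is why the slice [..., :4] above selects the first 4 coefficients --
# so the shape here should be (41, 41, 6).
print(model.coeff.shape)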
from pymks.datasets.cahn_hilliard_simulation import CahnHilliardSimulation
np.random.seed(191)
phi0 = np.random.normal(0, 1e-9, (1, n, n))
ch_sim = CahnHilliardSimulation(dt=dt)
phi_sim = phi0.copy()
phi_pred = phi0.copy()
time_steps = 10
for ii in range(time_steps):
ch_sim.run(phi_sim)
phi_sim = ch_sim.response
phi_pred = model.predict(phi_pred)
from pymks.tools import draw_concentrations_compare
draw_concentrations((phi_sim[0], phi_pred[0]), labels=('Simulation', 'MKS'))
m = 3 * n
model.resize_coeff((m, m))
phi0 = np.random.normal(0, 1e-9, (1, m, m))
phi_sim = phi0.copy()
phi_pred = phi0.copy()
for ii in range(1000):
ch_sim.run(phi_sim)
phi_sim = ch_sim.response
phi_pred = model.predict(phi_pred)
from pymks.tools import draw_concentrations_compare
draw_concentrations_compare((phi_sim[0], phi_pred[0]), labels=('Simulation', 'MKS'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (1b) Indentation
Step2: (1c) Functions
Step3: (1d) Special Types
Step4: (1e) Iterators
Step5: (1f) Generators and List Comprehension
Step6: When we need to work with very large lists, but without the need to access the elements randomly, we can use generators.
Step7: (1g) Files
Step8: Part 2
Step9: (2b) Element-wise multiplication and inner product
Step10: (2c) Matrix Multiplication
Step11: (2d) Slices
Step12: Part 3
Step13: Lambda functions have restrictions on the expression being computed. That expression cannot contain print or an increment such as +=, for example.
Step14: (3b) Functional Logic
Step15: (3c) Higher-Order Functions
Step16: This kind of function helps create declarative code, in which the code itself explains what is being done
Step20: (3d) Map, Reduce, Filter
Step21: To reduce the amount of code and make it easier to read, we can compose the functions in sequence
|
<ASSISTANT_TASK:>
Python Code:
x = 10 # x é um inteiro
print type(x)
x = 1.3 # x é um ponto flutuante
print type(x)
x = "Ola" # x é uma string
print type(x)
x = [1, 5, 10] # x é uma lista
print type(x)
x = 10
for i in range(20):
# Início da repetição For
x = x + 1
if x%2 == 0:
# Instrução se condição verdadeira
x = x + 1
else:
# Instrução se a condição for falsa
x = x + 2
# Fim do bloco de repetição For
print x # Isso está fora do for! Boa Ideia!
def Soma(x,y):
return x+y
def Mult(x,y):
return x*y
def SomaMult(x,y):
return x+y, x*y # múltiplas saídas separadas por vírgula
print Soma(10,2), Mult(10,2), SomaMult(10,2)
# O retorno de múltiplas saídas podem ser atribuídas diretamente para múltiplas variáveis
w,z = SomaMult(10,2)
print w, z
lista = [1, 2, True, "palavra"]
tupla = (1, 2, True, "palavra")
lista[1] = 3.0
print lista
tupla[1] = 3.0 # Vai dar erro!
# range(n) gera uma lista de valores entre 0 e n-1
# len(lista) retorna o tamanho de uma lista
def DobraValores(lista):
for i in range(len(lista)):
lista[i] = lista[i]*2
return lista
lista = [1,2,3,4]
lista2 = DobraValores(lista)
print lista, lista2 # As listas são passadas como referência para as funções
dicionario = { "Ana":12, "Joao":13, "Jose":17 } # declaração inicial do dicionário, pode ser {} para dic. vazio
print dicionario["Ana"] # acesso ao elemento pela chave entre colchetes
dicionario["Maria"] = 11 # podemos alterar ou inserir um novo elemento
print dicionario
print "As chaves do dicionário são: ", dicionario.keys()
print "Os valores do dicionário são: ", dicionario.values()
lista = range(10) # gera a lista [0,..,9]
print 8 in lista, 12 in lista
for x in lista:
print x
# Jeito tradicional, mas não otimizado
listaOriginal = [1,2,3,4,5,6,7,8,9]
listaQuadrada = []
for x in listaOriginal:
listaQuadrada.append(x*x)
print listaQuadrada
# Através do List Comprehension
listaQuadrada = [ x*x for x in listaOriginal ]
print listaQuadrada
listaQuadrada = ( x*x for x in listaOriginal )
print listaQuadrada # os elementos ainda não foram calculados
for x in listaQuadrada:
print x # a cada iteração apenas o próximo elemento é calculado, a lista não existe na memória
import os.path
caminho = os.path.join('Data','Aula01') # garante o uso correto de / ou \\ para diretórios
arquivo = os.path.join(caminho,'exemplo.txt')
f = open(arquivo)
for linha in f:
print linha
f.close()
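# Added example: the same read can be written with a "with" block, which closes the
# file automatically even if an error occurs inside the loop.
with open(arquivo) as f2:
    for linha in f2:
        print linha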
# Como convenção importaremos a biblioteca numpy como np
import numpy as np
# EXERCICIO
# Crie uma array numpy com os valores 1, 2, 3
arraySimples = <COMPLETAR>
# Faça o produto escalar multiplicando a array por 5
vezesCinco = <COMPLETAR>
print arraySimples
print vezesCinco
# TESTE do exercício (2a)
assert np.all(vezesCinco == [5, 10, 15]), 'valor incorreto para vezesCinco'
print "Correto!"
# EXERCICIO
# A função np.arange(inicio,fim,passo) cria uma lista iniciando em inicio, terminando antes do fim seguindo passo
u = np.arange(0, 5, .5) # np.array([0,0.5,1.0,...,4.5])
v = np.arange(5, 10, .5)
elementoAelemento = <COMPLETAR>
prodInterno = <COMPLETAR>
print 'u: {0}'.format(u)
print 'v: {0}'.format(v)
print '\nelementoAelemento\n{0}'.format(elementoAelemento)
print '\nprodInterno\n{0}'.format(prodInterno)
# TESTE do exercício (2b)
assert np.all(elementoAelemento == [ 0., 2.75, 6., 9.75, 14., 18.75, 24., 29.75, 36., 42.75]), "Valores incorretos para elementoAelemento"
print "Primeiro teste OK"
assert prodInterno==183.75, "Valor incorreto para prodInterno"
print "Segundo teste OK"
# EXERCICIO
from numpy.linalg import inv # agora podemos utilizar o comando inv() sem preceder com np.linalg
# Criar uma matriz com listas de listas
A = np.matrix([[1,2,3,4],[5,6,7,8]])
print 'A:\n{0}'.format(A)
# Imprima a matriz transposta
print '\nA transposta:\n{0}'.format(<COMPLETAR>)
# Multiplique A por sua Transposta
AAt = <COMPLETAR>
print '\nAAt:\n{0}'.format(AAt)
# Inverta AAt com o comando inv()
AAtInv = <COMPLETAR>
print '\nAAtInv:\n{0}'.format(AAtInv)
# Mostre que a matriz vezes sua inversa é a identidade
# .round(n) arredonda os valores para n casas decimais
print '\nAAtInv * AAt:\n{0}'.format((<COMPLETAR>).round(4))
# TESTE do exercício (2c)
assert np.all(AAt == np.matrix([[30, 70], [70, 174]])), "Valores incorretos para AAt"
print "Primeiro teste OK"
assert np.allclose(AAtInv, np.matrix([[0.54375, -0.21875], [-0.21875, 0.09375]])), "Valor incorreto para AAtInv"
print "Segundo teste OK"
# EXERCICIO
atributos = np.array([1, 2, 3, 4])
print 'atributos:\n{0}'.format(atributos)
# Crie uma array com os 3 últimos elementos de atributos
ultTres = <COMPLETAR>
print '\nÚltimos três:\n{0}'.format(ultTres)
# TEST do exercício (2d)
assert np.all(ultTres == [2, 3, 4]), "Valores incorretos para ultTres"
print "Teste OK"
# EXERCICIO
# Lembre-se que: "lambda x, y: x + y" cria uma função que adiciona dois valores
mult10 = <COMPLETAR>
print mult10(5)
# Note that the function still shows its name as <lambda>
print '\n', mult10
assert mult10(10)==100, "Função incorreta"
print "Teste OK"
p1 = (1,3)
p2 = (3,7)
euclidiana2D = lambda (x0,y0), (x1,y1): np.sqrt(((x0-x1)**2) + ((y0-y1)**2)) # sqrt é a raíz quadrada
print euclidiana2D(p1,p2)
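# Added example illustrating the restriction above: a lambda body must be a single
# expression, so anything that needs statements (print, +=, loops) belongs in a regular
# function. This def (hypothetical name euclidiana2D_def) is equivalent to the lambda.
def euclidiana2D_def((x0, y0), (x1, y1)):
    return np.sqrt(((x0 - x1) ** 2) + ((y0 - y1) ** 2))
print euclidiana2D_def(p1, p2)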
# Lógica não-funcional
a = 0
def inc():
global a
a = a + 1
# Lógica funcional
def incFn(a):
return a+1
# Função para somar 3 valores
def Soma3(a,b,c):
return a+b+c
# Função que soma apenas dois valores
def Soma2(a,b):
return a+b
# Soma 3 poderia ser criado a partir de Soma2:
Soma3Fn = lambda a,b,c: Soma2(Soma2(a,b),c)
# Cria uma função que calcula a eq. do segundo grau no formato ax^2 + bx + c
def Eq2grau(a,b,c):
def f(x):
return a*x**2 + b*x + c
return f
f = Eq2grau(10,2,1)
print f(10)
# EXERCICIO
# Escreva uma função Soma(x) que retorna uma função que recebe um valor y e soma ao x.
def Soma(x):
<COMPLETAR>
Soma2 = lambda a,b: Soma(a)(b)
Soma3 = lambda a,b,c: Soma(Soma(a)(b))(c)
print Soma2(1,3), Soma3(1,2,3)
assert Soma3(1,2,3)==6, "Erro na função"
print "Ok"
class FuncionalW(object):
def __init__(self, data):
self.data = data
def map(self, function):
        """Call `map` on the items in `data` using the provided `function`"""
return FuncionalW(map(function, self.data))
def reduce(self, function):
        """Call `reduce` on the items in `data` using the provided `function`"""
return reduce(function, self.data)
def filter(self, function):
        """Call `filter` on the items in `data` using the provided `function`"""
return FuncionalW(filter(function, self.data))
def __eq__(self, other):
return (isinstance(other, self.__class__)
and self.__dict__ == other.__dict__)
def __getattr__(self, name): return getattr(self.data, name)
def __getitem__(self, k): return self.data.__getitem__(k)
def __repr__(self): return 'FuncionalW({0})'.format(repr(self.data))
def __str__(self): return 'FuncionalW({0})'.format(str(self.data))
# Exemplo de Map
# Criaremos uma lista
lista = FuncionalW(range(10))
# Criar uma função a ser aplicada nessa lista
f = lambda x: x*x
# Programação Imperativa
resultado1 = FuncionalW([])
for x in lista:
resultado1.append(f(x))
print "Resultado: {}".format(resultado1)
# Funcional
print "Resultado usando Map: {}".format(lista.map(f))
# Exemplo de Reduce
# Criaremos uma lista
lista = FuncionalW(range(1,10))
# Criar uma função a ser aplicada nessa lista
f = lambda x,y: x*y
# Programação Imperativa
produtoria = 1
for x in lista:
produtoria = f(produtoria,x)
print "Resultado: {}".format(produtoria)
# Funcional
print "Resultado usando Reduce: {}".format(lista.reduce(f))
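# Exemplo de Filter (added example, mirroring the Map and Reduce examples above):
# filter keeps only the elements for which the function returns True.
lista = FuncionalW(range(10))
f = lambda x: x > 4
print "Resultado usando Filter: {}".format(lista.filter(f))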
# EXERCICIO
dataset = FuncionalW(range(10))
# Multiplique cada elemento por 5
mapResult = <COMPLETAR>
# Filtre eliminando os elementos ímpares
# No Python "x % 2" é o resultado do resto da divisão de x por 2
filterResult = <COMPLETAR>
# Some os elementos
reduceResult = <COMPLETAR>
print 'mapResult: {0}'.format(mapResult)
print '\nfilterResult: {0}'.format(filterResult)
print '\nreduceResult: {0}'.format(reduceResult)
assert mapResult == FuncionalW([0, 5, 10, 15, 20, 25, 30, 35, 40, 45]),"Valor incorreto para mapResult"
print "Teste 1 OK"
assert filterResult == FuncionalW([0, 2, 4, 6, 8]), "Valor incorreto para filterResult"
print "Teste 2 OK"
assert reduceResult == 45, "Valor incorreto para reduceResult"
print "Teste 3 OK"
dataset = FuncionalW(range(10))
Soma = (dataset
.map(lambda x: x*5)
.filter(lambda x: x%2==0)
.reduce(lambda x,y: x+y)
)
print Soma
# EXERCICIO
# split() divide a string em palavras
Texto = FuncionalW("Esse texto tem varias palavras cada linha tem palavras escritas Esse texto esta escrito".split())
# Vamos fazer uma contagem da palavra 'palavras' no texto
# Crie uma função lambda que recebe duas entradas e retorna se são iguais ou não
Igual = <COMPLETAR>
# Crie uma função lambda que utiliza a função Igual para detectar se a entrada é igual a palavra 'palavras'
DetectaPalavra = <COMPLETAR>
# 1) Filtre as palavras iguais a 'palavras'
# 2) Mapeie todos os elementos para o valor 1
# 3) Reduza para a somatória
contagem = (Texto
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
)
print "Existem {} ocorrências de 'palavras'".format(contagem)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following multiple risk factor valuation classes are available
Step2: We assume a positive correlation between the two risk factors.
Step3: Valuation Environment
Step4: valuation_mcs_european_multi
Step5: At instantiation, the respective risk factor model objects are instantiated as well.
Step6: Correlations are stored as well, and the resulting correlation and Cholesky matrices are generated.
Step7: The payoff of a European option is a one-dimensional ndarray object.
Step8: Present value estimations are generated by a call of the present_value method.
Step9: The update method allows updating of certain parameters.
Step10: Let us reset the values to the original parameters.
Step11: When calculating Greeks the risk factor now has to be specified by providing its name.
Step12: Sensitivities Positive Correlation
Step13: Sensitivities Risk Factor 2
Step14: Sensitivities with Negative Correlation
Step15: Sensitivities Risk Factor 1
Step16: Sensitivities Risk Factor 2
Step17: Surfaces for Positive Correlation Case
Step18: Value Surface
Step19: The following code estimates, for all possible combinations of the initial values---given the assumptions from above---the present value of the European maximum call option.
Step20: The resulting plot then looks as follows. Here, a helper plot function of DX Analytics is used.
Step21: Delta Surfaces
Step22: The plot for the delta surface of the first risk factor.
Step23: And the plot for the delta of the second risk factor.
Step24: Vega Surfaces
Step25: The surface for the first risk factor.
Step26: And the one for the second risk factor.
Step27: Finally, we reset the initial values and the volatilities for the two risk factors.
Step28: valuation_mcs_american_multi
Step29: The present value surface is generated in the same way as before for the European option on the two risk factors. The computational burden is of course much higher for the American option, which is valued by the Least-Squares Monte Carlo (LSM) approach of Longstaff-Schwartz (2001).
Step30: Delta Surfaces
Step31: Vega Surfaces
Step32: More than Two Risk Factors
Step33: In this case, we need to specify three correlation values.
Step34: The payoff function in this case gets a bit more complex.
Step35: However, the instantiation of the valuation class remains the same.
Step36: Example Output and Calculations
Step37: The correlation and Cholesky matrices now are of shape 4x4.
Step38: Delta and vega estimates are generated in exactly the same fashion as in the two risk factor case.
Step39: Delta for Jump Diffusion and Stochastic Vol Process
Step40: Vega for Jump Diffusion and Stochastic Vol Process
Step41: American Exercise
Step42: However, this is another illustration that even such a complex instrument can be handled as elegantly as the simplest one (i.e. a European option on a single risk factor). Let us compare the present value estimates for both the European and American maximum basket options.
|
<ASSISTANT_TASK:>
Python Code:
from dx import *
import seaborn as sns; sns.set()
import time
t0 = time.time()
r = constant_short_rate('r', 0.06)
me1 = market_environment('me1', dt.datetime(2015, 1, 1))
me2 = market_environment('me2', dt.datetime(2015, 1, 1))
me1.add_constant('initial_value', 36.)
me1.add_constant('volatility', 0.1) # low volatility
me1.add_constant('currency', 'EUR')
me1.add_constant('model', 'gbm')
me2.add_environment(me1)
me2.add_constant('initial_value', 36.)
me2.add_constant('volatility', 0.5) # high volatility
risk_factors = {'gbm1' : me1, 'gbm2' : me2}
correlations = [['gbm1', 'gbm2', 0.5]]
val_env = market_environment('val_env', dt.datetime(2015, 1, 1))
val_env.add_constant('starting_date', val_env.pricing_date)
val_env.add_constant('final_date', dt.datetime(2015, 12, 31))
val_env.add_constant('frequency', 'W')
val_env.add_constant('paths', 5000)
val_env.add_curve('discount_curve', r)
val_env.add_constant('maturity', dt.datetime(2015, 12, 31))
val_env.add_constant('currency', 'EUR')
# European maximum call option
payoff_func = "np.maximum(np.maximum(maturity_value['gbm1'], maturity_value['gbm2']) - 38, 0)"
vc = valuation_mcs_european_multi(
name='European maximum call', # name
val_env=val_env, # valuation environment
risk_factors=risk_factors, # the relevant risk factors
correlations=correlations, # correlations between risk factors
payoff_func=payoff_func) # payoff function
vc.risk_factors
vc.underlying_objects
vc.correlations
vc.correlation_matrix
vc.val_env.get_list('cholesky_matrix')
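# Added sanity check (assumes the correlation matrix and the Cholesky matrix behave
# like NumPy arrays): the Cholesky factor L should reproduce the correlation matrix
# via L L^T.
cholesky = vc.val_env.get_list('cholesky_matrix')
print np.allclose(np.dot(cholesky, cholesky.T), np.asarray(vc.correlation_matrix))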
np.shape(vc.generate_payoff())
vc.present_value()
vc.update('gbm1', initial_value=50.)
vc.present_value()
vc.update('gbm2', volatility=0.6)
vc.present_value()
vc.update('gbm1', initial_value=36., volatility=0.1)
vc.update('gbm2', initial_value=36., volatility=0.5)
vc.delta('gbm2', interval=0.5)
vc.vega('gbm1')
%%time
s_list = np.arange(28., 46.1, 2.)
pv = []; de = []; ve = []
for s in s_list:
vc.update('gbm1', initial_value=s)
pv.append(vc.present_value())
de.append(vc.delta('gbm1', .5))
ve.append(vc.vega('gbm1', 0.2))
vc.update('gbm1', initial_value=36.)
%matplotlib inline
plot_option_stats(s_list, pv, de, ve)
%%time
s_list = np.arange(28., 46.1, 2.)
pv = []; de = []; ve = []
for s in s_list:
vc.update('gbm2', initial_value=s)
pv.append(vc.present_value())
de.append(vc.delta('gbm2', .5))
ve.append(vc.vega('gbm2', 0.2))
plot_option_stats(s_list, pv, de, ve)
correlations = [['gbm1', 'gbm2', -0.9]]
# European maximum call option
payoff_func = "np.maximum(np.maximum(maturity_value['gbm1'], maturity_value['gbm2']) - 38, 0)"
vc = valuation_mcs_european_multi(
name='European maximum call',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff_func)
%%time
s_list = np.arange(28., 46.1, 2.)
pv = []; de = []; ve = []
for s in s_list:
vc.update('gbm1', initial_value=s)
pv.append(vc.present_value())
de.append(vc.delta('gbm1', .5))
ve.append(vc.vega('gbm1', 0.2))
vc.update('gbm1', initial_value=36.)
plot_option_stats(s_list, pv, de, ve)
%%time
s_list = np.arange(28., 46.1, 2.)
pv = []; de = []; ve = []
for s in s_list:
vc.update('gbm2', initial_value=s)
pv.append(vc.present_value())
de.append(vc.delta('gbm2', .5))
ve.append(vc.vega('gbm2', 0.2))
plot_option_stats(s_list, pv, de, ve)
correlations = [['gbm1', 'gbm2', 0.5]]
# European maximum call option
payoff_func = "np.maximum(np.maximum(maturity_value['gbm1'], maturity_value['gbm2']) - 38, 0)"
vc = valuation_mcs_european_multi(
name='European maximum call',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff_func)
asset_1 = np.arange(28., 46.1, 4.) # range of initial values
asset_2 = asset_1
a_1, a_2 = np.meshgrid(asset_1, asset_2)
# two-dimensional grids out of the value vectors
value = np.zeros_like(a_1)
%%time
for i in range(np.shape(value)[0]):
for j in range(np.shape(value)[1]):
vc.update('gbm1', initial_value=a_1[i, j])
vc.update('gbm2', initial_value=a_2[i, j])
value[i, j] = vc.present_value()
plot_greeks_3d([a_1, a_2, value], ['gbm1', 'gbm2', 'present value'])
delta_1 = np.zeros_like(a_1)
delta_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(delta_1)[0]):
for j in range(np.shape(delta_1)[1]):
vc.update('gbm1', initial_value=a_1[i, j])
vc.update('gbm2', initial_value=a_2[i, j])
delta_1[i, j] = vc.delta('gbm1')
delta_2[i, j] = vc.delta('gbm2')
plot_greeks_3d([a_1, a_2, delta_1], ['gbm1', 'gbm2', 'delta gbm1'])
plot_greeks_3d([a_1, a_2, delta_2], ['gbm1', 'gbm2', 'delta gbm2'])
vega_1 = np.zeros_like(a_1)
vega_2 = np.zeros_like(a_1)
for i in range(np.shape(vega_1)[0]):
for j in range(np.shape(vega_1)[1]):
vc.update('gbm1', initial_value=a_1[i, j])
vc.update('gbm2', initial_value=a_2[i, j])
vega_1[i, j] = vc.vega('gbm1')
vega_2[i, j] = vc.vega('gbm2')
plot_greeks_3d([a_1, a_2, vega_1], ['gbm1', 'gbm2', 'vega gbm1'])
plot_greeks_3d([a_1, a_2, vega_2], ['gbm1', 'gbm2', 'vega gbm2'])
# restore initial values
vc.update('gbm1', initial_value=36., volatility=0.1)
vc.update('gbm2', initial_value=36., volatility=0.5)
# American put payoff
payoff_am = "np.maximum(34 - np.minimum(instrument_values['gbm1'], instrument_values['gbm2']), 0)"
# finer time grid and more paths
val_env.add_constant('frequency', 'B')
val_env.add_curve('time_grid', None)
# delete existing time grid information
val_env.add_constant('paths', 5000)
# American put option on minimum of two assets
vca = valuation_mcs_american_multi(
name='American minimum put',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff_am)
vca.present_value()
for key, obj in vca.instrument_values.items():
print np.shape(vca.instrument_values[key])
asset_1 = np.arange(28., 44.1, 4.)
asset_2 = asset_1
a_1, a_2 = np.meshgrid(asset_1, asset_2)
value = np.zeros_like(a_1)
%%time
for i in range(np.shape(value)[0]):
for j in range(np.shape(value)[1]):
vca.update('gbm1', initial_value=a_1[i, j])
vca.update('gbm2', initial_value=a_2[i, j])
value[i, j] = vca.present_value()
plot_greeks_3d([a_1, a_2, value], ['gbm1', 'gbm2', 'present value'])
delta_1 = np.zeros_like(a_1)
delta_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(delta_1)[0]):
for j in range(np.shape(delta_1)[1]):
vca.update('gbm1', initial_value=a_1[i, j])
vca.update('gbm2', initial_value=a_2[i, j])
delta_1[i, j] = vca.delta('gbm1')
delta_2[i, j] = vca.delta('gbm2')
plot_greeks_3d([a_1, a_2, delta_1], ['gbm1', 'gbm2', 'delta gbm1'])
plot_greeks_3d([a_1, a_2, delta_2], ['gbm1', 'gbm2', 'delta gbm2'])
vega_1 = np.zeros_like(a_1)
vega_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(vega_1)[0]):
for j in range(np.shape(vega_1)[1]):
vca.update('gbm1', initial_value=a_1[i, j])
vca.update('gbm2', initial_value=a_2[i, j])
vega_1[i, j] = vca.vega('gbm1')
vega_2[i, j] = vca.vega('gbm2')
plot_greeks_3d([a_1, a_2, vega_1], ['gbm1', 'gbm2', 'vega gbm1'])
plot_greeks_3d([a_1, a_2, vega_2], ['gbm1', 'gbm2', 'vega gbm2'])
me3 = market_environment('me3', dt.datetime(2015, 1, 1))
me4 = market_environment('me4', dt.datetime(2015, 1, 1))
me3.add_environment(me1)
me4.add_environment(me1)
# for jump-diffusion
me3.add_constant('lambda', 0.5)
me3.add_constant('mu', -0.6)
me3.add_constant('delta', 0.1)
me3.add_constant('model', 'jd')
# for stoch volatility model
me4.add_constant('kappa', 2.0)
me4.add_constant('theta', 0.3)
me4.add_constant('vol_vol', 0.2)
me4.add_constant('rho', -0.75)
me4.add_constant('model', 'sv')
val_env.add_constant('paths', 10000)
val_env.add_constant('frequency', 'W')
val_env.add_curve('time_grid', None)
risk_factors = {'gbm1' : me1, 'gbm2' : me2, 'jd' : me3, 'sv' : me4}
correlations = [['gbm1', 'gbm2', 0.5], ['gbm2', 'jd', -0.5], ['gbm1', 'sv', 0.7]]
# European maximum call payoff
payoff_1 = "np.maximum(np.maximum(np.maximum(maturity_value['gbm1'], maturity_value['gbm2']),"
payoff_2 = " np.maximum(maturity_value['jd'], maturity_value['sv'])) - 40, 0)"
payoff = payoff_1 + payoff_2
payoff
vc = valuation_mcs_european_multi(
name='European maximum call',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff)
vc.risk_factors
vc.underlying_objects
vc.present_value()
vc.correlation_matrix
vc.val_env.get_list('cholesky_matrix')
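# Added check: with four risk factors, both the correlation matrix and its Cholesky
# factor should be 4 x 4 (assuming NumPy-style shapes after conversion).
print np.asarray(vc.correlation_matrix).shape
print np.asarray(vc.val_env.get_list('cholesky_matrix')).shape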
vc.delta('jd', interval=0.1)
vc.delta('sv')
vc.vega('jd')
vc.vega('sv')
delta_1 = np.zeros_like(a_1)
delta_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(delta_1)[0]):
for j in range(np.shape(delta_1)[1]):
vc.update('jd', initial_value=a_1[i, j])
vc.update('sv', initial_value=a_2[i, j])
delta_1[i, j] = vc.delta('jd')
delta_2[i, j] = vc.delta('sv')
plot_greeks_3d([a_1, a_2, delta_1], ['jump diffusion', 'stochastic vol', 'delta jd'])
plot_greeks_3d([a_1, a_2, delta_2], ['jump diffusion', 'stochastic vol', 'delta sv'])
vega_1 = np.zeros_like(a_1)
vega_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(vega_1)[0]):
for j in range(np.shape(vega_1)[1]):
vc.update('jd', initial_value=a_1[i, j])
vc.update('sv', initial_value=a_2[i, j])
vega_1[i, j] = vc.vega('jd')
vega_2[i, j] = vc.vega('sv')
plot_greeks_3d([a_1, a_2, vega_1], ['jump diffusion', 'stochastic vol', 'vega jd'])
plot_greeks_3d([a_1, a_2, vega_2], ['jump diffusion', 'stochastic vol', 'vega sv'])
# payoff of American minimum put option
payoff_am_1 = "np.maximum(40 - np.minimum(np.minimum(instrument_values['gbm1'], instrument_values['gbm2']),"
payoff_am_2 = "np.minimum(instrument_values['jd'], instrument_values['sv'])), 0)"
payoff_am = payoff_am_1 + payoff_am_2
vca = valuation_mcs_american_multi(
name='American minimum put',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff_am)
# restore initial values
vc.update('jd', initial_value=36., volatility=0.1)
vc.update('sv', initial_value=36., volatility=0.1)
%time vc.present_value()
%time vca.present_value()
%time vca.delta('gbm1')
%time vca.delta('gbm2')
%time vca.vega('jd')
%time vca.vega('sv')
print "Duration for whole notebook %.2f in min" % ((time.time() - t0) / 60)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Create priors for weights (Spec classes)
Step3: Spec behaves like a tensor and has the same methods
Step4: Methods are used as usual, but instead of a tensor we get another Spec instance.
Step5: These operations are delayed until the user calls expr(shape). When that happens, the shape is passed to all defined specs in the expression tree. The tree is evaluated and pymc3 variables are created, one per spec instance and shape (there exist some corner cases in which one spec instance can get 2 different shapes).
Step6: Note that in the example above we specified one variable without a shape and the other with .with_shape(()). When the expression is evaluated, this custom shape replaces the shape that was provided in __call__.
Step7: More complex cases can require more accurate shape handling. This can be achieved with tags.
Step8: Now if we try to call the expression without tags we will get a KeyError. The unspecified shape is the default tag.
Step9: Sometimes it is useful to change the shape with a function that maps shape to shape
Step10: If you need some more complex transformation other than builtin tensor operations you can use a simple wrapper over a function
Step11: Determinant will be taken only after __call__
Step12: Disclaimer
Step13: Network
Step14: Yes, Gelato layers are Models and Layers at the same time. Moreover, the model context is taken from the previous layers, so you no longer need a with model block.
Step15: We now minimize the following objective
Step16: Seems something went wrong
Step17: Inference
Step18: Important to note
Step19: Making predictions
Step20: It seems MAP estimation may not be the best way to predict. On the other hand, it is much faster
Step21: Uncertainty
Step22: Wow, the variance seems to be much higher for pictures where errors were made
Step23: It seems our model is not always confident in its predictions and is aware of it. Let's check how often this happens
Step24: In 37% of cases our model is not confident enough, and this leads to mistakes.
Step25: Good news: this happens pretty rarely, exactly 4 times.
Step26: I would rather agree with my network sometimes
|
<ASSISTANT_TASK:>
Python Code:
%env THEANO_FLAGS=device=cuda0
import matplotlib.pyplot as plt
%matplotlib inline
import gelato
import theano
import theano.tensor as tt
theano.config.warn_float64 = 'warn'
import numpy as np
import lasagne
import pymc3 as pm
from sklearn.datasets import fetch_mldata
from sklearn.model_selection import train_test_split
def load_dataset():
    # Fetch the full MNIST data set (70,000 28x28 grayscale images) via scikit-learn.
mnist = fetch_mldata('MNIST original')
data = mnist['data'].reshape((70000, 1, 28, 28))
target = mnist['target']
    # Split the data into a training set and a test set of 10000 examples.
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=10000)
# We reserve the last 10000 training examples for validation.
X_train, X_val = X_train[:-10000], X_train[-10000:]
y_train, y_val = y_train[:-10000], y_train[-10000:]
# We just return all the arrays in order, as expected in main().
# (It doesn't matter how we do this as long as we can read them again.)
return X_train, y_train, X_val, y_val, X_test, y_test
print("Loading data...")
X_train, y_train, X_val, y_val, X_test, y_test = load_dataset()
total_size = X_train.shape[0]
from gelato.specs import NormalSpec
expr = NormalSpec() * NormalSpec().with_shape(())
expr
dir(expr)[70:80]
expr.argmin()
with pm.Model() as model:
expr((100, 2))
model.vars
list(map(lambda v: v.dshape, model.vars))
expr = NormalSpec().with_tag('one') * NormalSpec().with_tag('two') + NormalSpec()
with pm.Model() as model:
expr(dict(one=(100, 2), two=(), default=(100, 1)))
list(map(lambda v: v.dshape, model.vars))
expr = NormalSpec().with_shape(lambda s: (1,) * len(s)) * NormalSpec()
with pm.Model() as model:
expr((10, 10))
list(map(lambda v: v.dshape, model.vars))
from gelato.specs import as_spec_op
spec_det = as_spec_op(theano.tensor.nlinalg.det)
spec_det
spec_det(expr)
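# Hedged illustration (an assumption about the gelato API, based on the calls above):
# the determinant is applied only once the spec expression is called with a shape,
# i.e. when the underlying pymc3 variables are actually created.
with pm.Model() as det_model:
    det_node = spec_det(expr)((3, 3))  # square shape so the determinant is defined
list(map(lambda v: v.dshape, det_model.vars))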
from gelato.specs import NormalSpec, LognormalSpec, set_default_spec
# Now every layer without passed specs will use `NormalSpec(sd=hyper)` as a prior
hyper = LognormalSpec(sd=10).with_shape(())
set_default_spec(NormalSpec(sd=hyper))
print(issubclass(gelato.layers.InputLayer, pm.Model) and
issubclass(gelato.layers.Conv2DLayer, pm.Model))
print(issubclass(gelato.layers.InputLayer, lasagne.layers.Layer) and
issubclass(gelato.layers.Conv2DLayer, lasagne.layers.Layer))
input_var = pm.Minibatch(X_train, 100, dtype='float32')
target_var = pm.Minibatch(y_train, 100, dtype='int32')
network = gelato.layers.InputLayer(shape=(None, 1, 28, 28),
input_var=input_var)
network = gelato.layers.Conv2DLayer(
network, num_filters=32, filter_size=(5, 5),
nonlinearity=lasagne.nonlinearities.tanh)
# Max-pooling layer of factor 2 in both dimensions:
network = gelato.layers.MaxPool2DLayer(network, pool_size=(2, 2))
# Another convolution with 32 5x5 kernels, and another 2x2 pooling:
network = gelato.layers.Conv2DLayer(
network, num_filters=32, filter_size=(5, 5),
nonlinearity=lasagne.nonlinearities.tanh)
network = gelato.layers.MaxPool2DLayer(network,
pool_size=(2, 2))
n_hid2 = 176
network = gelato.layers.DenseLayer(
network, num_units=n_hid2,
nonlinearity=lasagne.nonlinearities.tanh,
)
n_hid3 = 64
network = gelato.layers.DenseLayer(
network, num_units=n_hid3,
nonlinearity=lasagne.nonlinearities.tanh,
)
# Finally, we'll add the fully-connected output layer, of 10 softmax units:
network = gelato.layers.DenseLayer(
network, num_units=10,
nonlinearity=lasagne.nonlinearities.softmax,
)
prediction = gelato.layers.get_output(network)
# Gelato layers are designed to simplify the whole process
# and are pm.Models as well as lasagne.layer.Layer
# You can work in the context of the last defined layer to define the likelihood
with network:
likelihood = pm.Categorical('out',
prediction,
observed=target_var,
total_size=total_size)
network.vars
network.root.vars
# I prefer object oriented style for doing inference in PyMC3 it allows more flexibility
# Remember that we need root model here
with network.root:
advi = pm.ADVI(scale_cost_to_minibatch=False)
advi.fit(80000, obj_optimizer=pm.adam(learning_rate=1e-3))
#import pickle
#params = advi.approx.shared_params
#pickle.dump(params, open('params.pymc3','wb'))
#advi.approx.shared_params = pickle.load(open('params.pymc3','rb'))
plt.figure(figsize=(16,9))
plt.plot(advi.hist, alpha=.3);
from theano.configparser import change_flags
# The right way to compile a function without changing important pymc3 flag `compute_test_value='raise'`
with change_flags(compute_test_value='ignore'):
# create symbolic input image
inpimg = tt.tensor4('input')
# number of samples for posterior predictive distribution
it = tt.iscalar('i')
# posterior predictive probability
_prediction = gelato.layers.get_output(network)
# then replacements follow
prediction = advi.approx.apply_replacements(
_prediction, deterministic=True,
# you can replace minibatch tensor with symbolic input
more_replacements={input_var:inpimg})
predictions = advi.approx.sample_node(
_prediction, it,
more_replacements={input_var:inpimg})
# That is it, finally we compile both functions
predictions_f = theano.function([inpimg, theano.In(it, 's', 10)], predictions)
prediction_f = theano.function([inpimg], prediction)
from scipy.stats import mode
y_pred_MAP = np.argmax(prediction_f(X_test), axis=1)
error_under_MAP = y_pred_MAP != y_test
error_rate_under_MAP = error_under_MAP.mean()
# distribution for probabilistic predictions
y_preds_posterior = predictions_f(X_test, 100)
# take the predicted class for every posterior sample,
# then take the mode across posterior samples as the final prediction
y_pred_posterior = mode(np.argmax(y_preds_posterior, axis=-1), axis=0).mode[0]
error_under_posterior = y_pred_posterior != y_test
error_rate_under_posterior = error_under_posterior.mean()
print('MAP : %f' % error_rate_under_MAP)
print('predictive posterior mode: %f' % error_rate_under_posterior)
def check_the_error_at(idx):
print('true:', y_test[error_under_posterior][idx],'prediction:', y_pred_posterior[error_under_posterior][idx])
plt.gray();plt.matshow(X_test[error_under_posterior][idx][0]);plt.show();
check_the_error_at(0)
check_the_error_at(1)
plt.plot(y_preds_posterior.var(0)[error_under_posterior].mean(0), label='errors')
plt.plot(y_preds_posterior.var(0)[~error_under_posterior].mean(0), label='correct')
plt.plot(y_preds_posterior.var(0).mean(0), label='all')
plt.legend()
y_preds_labels = np.argmax(y_preds_posterior, axis=-1)
prediction_expected_error_rate = (y_preds_labels != y_pred_posterior).mean(0)
plt.hist(prediction_expected_error_rate, bins=20)
plt.title('Expected Error Rate');
plt.xlabel('E[error rate]')
plt.ylabel('#observations')
((prediction_expected_error_rate != 0) == error_under_posterior).mean()
(prediction_expected_error_rate[error_under_posterior] == 0).mean()
(prediction_expected_error_rate[error_under_posterior] == 0).sum()
houston_we_have_a_problem = prediction_expected_error_rate[error_under_posterior] == 0
def problem_display():
for i in range(houston_we_have_a_problem.sum()):
print('true:', y_test[error_under_posterior][houston_we_have_a_problem][i],
'prediction:', y_pred_posterior[error_under_posterior][houston_we_have_a_problem][i])
plt.gray();plt.matshow(X_test[error_under_posterior][houston_we_have_a_problem][i][0]);plt.show();
problem_display()
top_three = np.argsort(prediction_expected_error_rate)[-3:][::-1]
top_three
def low_confidence_examples_display():
for i in top_three:
print('true:', y_test[i],
'prediction:', y_pred_posterior[i],
'expected error rate:', prediction_expected_error_rate[i]
)
plt.gray();plt.matshow(X_test[i][0]);plt.show();
low_confidence_examples_display()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quaternions - Pauli matrices
Step2: PHYSICS
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
# Assumption: Ga is provided by the galgebra package (the import path may differ between versions)
from galgebra.ga import Ga
variables = (x, y, z, w) = symbols('x y z w', real=True)
print(variables)
metric=[ 1
,1
,1
,1]
myBasis='e_1 e_2 e_3 e_4'
sp4d = Ga(myBasis, g=metric, coords=variables,norm=True)
(e_1, e_2, e_3, e_4) = sp4d.mv()
sigma_1w=e_2*e_3
sigma_2w=e_3*e_1
sigma_3w=-e_1*e_2
sigma_1w.texLabel='\\sigma_{1w}'
sigma_2w.texLabel='\\sigma_{2w}'
sigma_3w.texLabel='\\sigma_{3w}'
display(Math('(1,'+sigma_1w.texLabel+','+sigma_2w.texLabel+','+sigma_3w.texLabel+')'))
CheckProperties(sigma_1w,sigma_2w,sigma_3w)
sigma_1x=e_2*e_3
sigma_2x=e_3*e_4
sigma_3x=-e_4*e_2
sigma_1x.texLabel='\\sigma_{1x}'
sigma_2x.texLabel='\\sigma_{2x}'
sigma_3x.texLabel='\\sigma_{3x}'
display(Math('(1,'+sigma_1x.texLabel+','+sigma_2x.texLabel+','+sigma_3x.texLabel+')'))
CheckProperties(sigma_1x,sigma_2x,sigma_3x)
sigma_1y=e_1*e_3
sigma_2y=e_3*e_4
sigma_3y=-e_4*e_1
sigma_1y.texLabel='\\sigma_{1y}'
sigma_2y.texLabel='\\sigma_{2y}'
sigma_3y.texLabel='\\sigma_{3y}'
display(Math('(1,'+sigma_1y.texLabel+','+sigma_2y.texLabel+','+sigma_3y.texLabel+')'))
CheckProperties(sigma_1y,sigma_2y,sigma_3y)
sigma_1z=e_1*e_2
sigma_2z=e_2*e_4
sigma_3z=-e_4*e_1
sigma_1z.texLabel='\\sigma_{1z}'
sigma_2z.texLabel='\\sigma_{2z}'
sigma_3z.texLabel='\\sigma_{3z}'
display(Math('(1,'+sigma_1z.texLabel+','+sigma_2z.texLabel+','+sigma_3z.texLabel+')'))
CheckProperties(sigma_1z,sigma_2z,sigma_3z)
gamma_4=e_1*e_2*e_3
gamma_1=e_4*e_2*e_3
gamma_2=e_1*e_4*e_3
gamma_3=e_1*e_2*e_4
displayWithTitle(gamma_4,title='\\gamma_4')
displayWithTitle(gamma_4*gamma_4,title='\\gamma_4^2')
displayWithTitle(gamma_1,title='\\gamma_1')
displayWithTitle(gamma_1*gamma_1,title='\\gamma_1^2')
displayWithTitle(gamma_2,title='\\gamma_2')
displayWithTitle(gamma_2*gamma_2,title='\\gamma_2^2')
displayWithTitle(gamma_3,title='\\gamma_3')
displayWithTitle(gamma_3*gamma_3,title='\\gamma_3^2')
m, q = symbols('m q', real=True)
rquat = [sigma_1w, sigma_2w, sigma_3w]
B_x, B_y, B_z = symbols('B_x B_y B_z', real=True)
Bv =[B_x, B_y, B_z]
B = S(0)
for (dim, var) in zip(Bv, rquat):
B += var * dim
B.texLabel='\\mathbf{B}'
display(Latex('Magnetic field $'+B.texLabel+'$ is defined with $B_x, B_y, B_z \\in \\mathbb{R}$'))
display(Math(B.texLabel+'=B_x'+sigma_1w.texLabel+'+B_y'+sigma_2w.texLabel+'+B_z'+sigma_3w.texLabel))
displayWithTitle(B, title=B.texLabel)
a, b, c, d = symbols('a b c d', real=True)
psiv =[b, c, d]
psi = S(a)
for (dim, var) in zip(psiv, rquat):
psi += var * dim
psi.texLabel='\\psi'
display(Latex('Pauli spinor $'+psi.texLabel+'$ is defined with $a, b, c, d\\in \\mathbb{R}$'))
display(Math(psi.texLabel+'=a+b'+sigma_1w.texLabel+'+c'+sigma_2w.texLabel+'+d'+sigma_3w.texLabel))
displayWithTitle(psi, title=psi.texLabel)
psiv_dagger =[-b, -c, -d]
psi_dagger = S(a)
for (dim, var) in zip(psiv_dagger, rquat):
psi_dagger += var * dim
psi_dagger.texLabel='\\psi^\\dagger'
display(Math(psi_dagger.texLabel+'=a-b'+sigma_1w.texLabel+'-c'+sigma_2w.texLabel+'-d'+sigma_3w.texLabel))
displayWithTitle(psi_dagger, title=psi_dagger.texLabel)
rho=psi*psi_dagger
rho.texLabel = '\\rho'
display(Math(rho.texLabel+'='+psi.texLabel+psi_dagger.texLabel))
displayWithTitle(rho, title=rho.texLabel)
spin_vector_w = psi*sigma_3w*psi_dagger
spin_vector_w.texLabel = '\\boldsymbol{s}_w'
display(Math(spin_vector_w.texLabel+'='+psi.texLabel+sigma_3w.texLabel+psi_dagger.texLabel))
displayWithTitle(spin_vector_w, title=spin_vector_w.texLabel)
up=1/2*(1+e_3)
up.texLabel='\\uparrow'
displayWithTitle(up, title=up.texLabel)
displayWithTitle(up*up, title=up.texLabel+'^2')
down=1/2*(1-e_3)
down.texLabel='\\downarrow'
displayWithTitle(down, title=down.texLabel)
displayWithTitle(down*down, title=down.texLabel+'^2')
gamma_5 = e_1*e_2*e_3*e_4
gamma_5.texLabel = '\\gamma_5'
left=1/2*(1-gamma_5)
left.texLabel='['
displayWithTitle(left, title=left.texLabel)
displayWithTitle(left*left, title=left.texLabel+'^2')
right=1/2*(1+gamma_5)
right.texLabel=']'
displayWithTitle(right, title=right.texLabel)
displayWithTitle(right*right, title=right.texLabel+'^2')
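# Hedged extra check (relies on the same displayWithTitle helper assumed above):
# the chiral projectors should be orthogonal and should sum to the identity.
displayWithTitle(left*right, title='LR')
displayWithTitle(left + right, title='L+R')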
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Market Environment and Portfolio Object
Step2: Using pandas under the hood, the class retrieves historical stock price data from either Yahoo! Finance or Google.
Step3: Basic Statistics
Step4: Given these weights you can calculate the portfolio return via the method get_portfolio_return.
Step5: Analogously, you can call get_portfolio_variance to get the historical portfolio variance.
Step6: The class also has a neatly printable string representation.
Step7: Setting Weights
Step8: You can also easily check results for different weights by changing the attribute values of an object.
Step9: Let us implement a Monte Carlo simulation over potential portfolio weights.
Step10: And the simulation results visualized.
Step11: Optimizing Portfolio Composition
Step12: Instead of maximizing the portfolio return without any constraints, you can also set a (sensible/possible) maximum target volatility level as a constraint. Both, in an exact sense ("equality constraint") ...
Step13: ... or just as an upper bound ("inequality constraint").
Step14: Risk
Step15: And, as before, to set constraints (in this case) for the target return level.
Step16: Sharpe Ratio
Step17: Efficient Frontier
Step18: The plot with the random and efficient portfolios.
Step19: Capital Market Line
Step20: The following plot illustrates that the capital market line has an ordinate value equal to the risk-free rate (the safe return of the money market account) and is tangent to the efficient frontier.
Step21: Portfolio return and risk of the efficient portfolio used are
Step22: The portfolio composition can be derived as follows.
Step23: Or also in this way.
Step24: Dow Jones Industrial Average
Step25: Data retrieval in this case takes a bit.
Step26: Given the larger data set now used, efficient frontier ...
Step27: ... and capital market line derivations also take longer.
|
<ASSISTANT_TASK:>
Python Code:
from dx import *
import seaborn as sns; sns.set()
ma = market_environment('ma', dt.date(2010, 1, 1))
ma.add_list('symbols', ['AAPL', 'GOOG', 'MSFT', 'FB'])
ma.add_constant('source', 'google')
ma.add_constant('final date', dt.date(2014, 3, 1))
%%time
port = mean_variance_portfolio('am_tech_stocks', ma)
# instantiates the portfolio class
# and retrieves all the time series data needed
port.get_weights()
# defaults to equal weights
port.get_portfolio_return()
# expected (= historical mean) return
port.get_portfolio_variance()
# expected (= historical) variance
print port
# ret. con. is "return contribution"
# given the mean return and the weight
# of the security
port.set_weights([0.6, 0.2, 0.1, 0.1])
print port
port.test_weights([0.6, 0.2, 0.1, 0.1])
# returns av. return + vol + Sharpe ratio
# without setting new weights
# Monte Carlo simulation of portfolio compositions
rets = []
vols = []
for w in range(500):
weights = np.random.random(4)
weights /= sum(weights)
r, v, sr = port.test_weights(weights)
rets.append(r)
vols.append(v)
rets = np.array(rets)
vols = np.array(vols)
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10, 6))
plt.scatter(vols, rets, c=rets / vols, marker='o')
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.colorbar(label='Sharpe ratio')
port.optimize('Return')
# maximizes expected return of portfolio
# no volatility constraint
print port
port.optimize('Return', constraint=0.225, constraint_type='Exact')
# interprets volatility constraint as equality
print port
port.optimize('Return', constraint=0.4, constraint_type='Bound')
# interprets volatility constraint as inequality (upper bound)
print port
port.optimize('Vol')
# minimizes expected volatility of portfolio
# no return constraint
print port
port.optimize('Vol', constraint=0.175, constraint_type='Exact')
# interprets return constraint as equality
print port
port.optimize('Vol', constraint=0.20, constraint_type='Bound')
# interprets return constraint as inequality (upper bound)
print port
port.optimize('Sharpe')
# maximize Sharpe ratio
print port
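# Hedged cross-check (assumption: a zero risk-free rate): the Sharpe ratio reported
# above should roughly equal the portfolio return divided by portfolio volatility.
port.get_portfolio_return() / port.get_portfolio_variance() ** 0.5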
%%time
evols, erets = port.get_efficient_frontier(100)
# 100 points of the effient frontier
plt.figure(figsize=(10, 6))
plt.scatter(vols, rets, c=rets / vols, marker='o')
plt.scatter(evols, erets, c=erets / evols, marker='x')
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.colorbar(label='Sharpe ratio')
%%time
cml, optv, optr = port.get_capital_market_line(riskless_asset=0.05)
# capital market line for effiecient frontier and risk-less short rate
cml # lambda function for capital market line
plt.figure(figsize=(10, 6))
plt.plot(evols, erets, lw=2.0, label='efficient frontier')
plt.plot((0, 0.4), (cml(0), cml(0.4)), lw=2.0, label='capital market line')
plt.plot(optv, optr, 'r*', markersize=10, label='optimal portfolio')
plt.legend(loc=0)
plt.ylim(0)
plt.xlabel('expected volatility')
plt.ylabel('expected return')
optr
optv
port.optimize('Vol', constraint=optr, constraint_type='Exact')
print port
port.optimize('Return', constraint=optv, constraint_type='Exact')
print port
symbols = ['AXP', 'BA', 'CAT', 'CSCO', 'CVX', 'DD', 'DIS', 'GE',
'GS', 'HD', 'IBM', 'INTC', 'JNJ', 'JPM', 'KO', 'MCD', 'MMM',
'MRK', 'MSFT', 'NKE', 'PFE', 'PG', 'T', 'TRV', 'UNH', 'UTX',
'V', 'VZ','WMT', 'XOM']
# all DJIA 30 symbols
ma = market_environment('ma', dt.date(2010, 1, 1))
ma.add_list('symbols', symbols)
ma.add_constant('source', 'google')
ma.add_constant('final date', dt.date(2014, 3, 1))
%%time
djia = mean_variance_portfolio('djia', ma)
# defining the portfolio and retrieving the data
%%time
djia.optimize('Vol')
print djia.variance, djia.variance ** 0.5
# minimium variance & volatility in decimals
%%time
evols, erets = djia.get_efficient_frontier(25)
# efficient frontier of DJIA
%%time
cml, optv, optr = djia.get_capital_market_line(riskless_asset=0.01)
# capital market line and optimal (tangent) portfolio
plt.figure(figsize=(10, 6))
plt.plot(evols, erets, lw=2.0, label='efficient frontier')
plt.plot((0, 0.4), (cml(0), cml(0.4)), lw=2.0, label='capital market line')
plt.plot(optv, optr, 'r*', markersize=10, label='optimal portfolio')
plt.legend(loc=0)
plt.ylim(0)
plt.xlabel('expected volatility')
plt.ylabel('expected return')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build a Multilayer Convolutional Network
Step2: Convolution and Pooling
Step3: First Convolutional Layer
Step4: To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to
Step5: We then convolve x_image with the weight tensor, add the bias, apply the ReLU function, and finally max pool. The max_pool_2x2 method will reduce the image size to 14x14.
Step6: Second Convolutional Layer
Step7: Densely Connected Layer
Step8: Dropout
Step9: Readout Layer
Step10: Train and Evaluate the Model
|
<ASSISTANT_TASK:>
Python Code:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
mnist = input_data.read_data_sets('MNIST_data', one_hot = True)
################## build a softmax regression model
# input data
x = tf.placeholder(tf.float32, shape = [None, 784])
# real labels
y_ = tf.placeholder(tf.float32, shape = [None, 10])
# variables(or weights)
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# training strategy(loss function, what type of gradident etc...)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# start session
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for _ in range(1000):
batch = mnist.train.next_batch(100)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
    # evaluate the model (inside the session context so .eval() can find a default session)
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev = 0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding = 'SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
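# Optional sanity check (assumption: 28x28 inputs): after two rounds of convolution
# and 2x2 max pooling the feature map should have shape (?, 7, 7, 64).
print(h_pool2.get_shape())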
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
# An InteractiveSession installs itself as the default session, so the bare
# .eval()/.run() calls in the training loop below work without an explicit session.
sess1 = tf.InteractiveSession()
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess1.run(tf.global_variables_initializer())
for i in range(20000):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("test accuracy %g"% accuracy.eval(session=sess1, feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For the parameters chosen above, the long-term (asymptotic) solution is a fixed point. Let's see this in the phase space, that is, the space of Predators ($P$) vs. Prey ($V$). We note that the arrows are "circulating", but always point inwards, and so the trajectory moves toward the middle, to the fixed point.
Step2: Messing a little with the parameters...
Step3: And, looking again at the phase space plot, we now see that the flux (the arrows) inside circles outwards, towards a limit cycle, and the arrows outside point inwards. The limit cycle corresponds to the periodic solution we just saw.
Step4: The bifurcation diagram
Step5: This is a very poor bifurcation diagram
Step6: Well, the first prediction was OK (notice that the plot above uses a log scale), but for high $K$, the minima of the oscillation go to very low values, so that the populations have a high risk of extinction. This phenomenon is the so-called paradox of enrichment.
Step7: Notice that, even with small $K$, the solutions oscillate due to the oscillation of $r(t)$.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from numpy import *
from scipy.integrate import odeint
from matplotlib.pyplot import *
ion()
def RM(y, t, r, K, a, h, e, d):
return array([ y[0] * ( r*(1-y[0]/K) - a*y[1]/(1+a*h*y[0]) ),
y[1] * (e*a*y[0]/(1+a*h*y[0]) - d) ])
t = arange(0, 1000, .1)
y0 = [1, 1.]
pars = (1., 10, 1., 0.1, 0.1, 0.1)
y = odeint(RM, y0, t, pars)
plot(t, y)
xlabel('time')
ylabel('population')
legend(['resource', 'consumer'])
# plot the solution in the phase space
plot(y[:,0], y[:,1])
# defines a grid of points
R, C = meshgrid(arange(0.95, 1.25, .05), arange(0.95, 1.04, 0.01))
# calculates the value of the derivative at the point in the grid
dy = RM(array([R, C]), 0, *pars)
# plots arrows on the points of the grid, with the difection
# and length determined by the derivative dy
# This is a picture of the flow of the solution in the phase space
quiver(R, C, dy[0,:], dy[1,:], scale_units='xy', angles='xy')
xlabel('Resource')
ylabel('Consumer')
# now K = 15
t = arange(0, 1000, .1)
pars = (1., 15., 1., 0.1, 0.1, 0.1)
y_osc = odeint(RM, y0, t, pars)
plot(t, y_osc)
xlabel('time')
ylabel('population')
legend(['resource', 'consumer'])
plot(y_osc[:,0], y_osc[:,1])
R, C = meshgrid(arange(0, 6., .4), arange(0, 2.1, 0.2))
dy = RM(array([R, C]), 0, *pars)
quiver(R, C, dy[0,:], dy[1,:], scale_units='xy', angles='xy')
xlabel('R')
ylabel('C')
plot(10., y[-500:,0].min(), 'og')
plot(10., y[-500:,0].max(), 'og')
plot(10., y[-500:,1].min(), 'ob')
plot(10., y[-500:,1].max(), 'ob')
plot(15., y_osc[-500:,0].min(), 'og')
plot(15., y_osc[-500:,0].max(), 'og')
plot(15., y_osc[-500:,1].min(), 'ob')
plot(15., y_osc[-500:,1].max(), 'ob')
xlim((0, 20))
yscale('log')
xlabel('K')
ylabel('min / max population')
## this block calculates solutions for many K's, it should take some time
# empty lists to append the values later
ymin = []
ymax = []
KK = arange(.5, 25, .5)
t = arange(0, 6000, 1.)
# loop over the values of K (KK)
for K in KK:
# redefine the parameters using the new K
pars = (1., K, 1., 0.1, 0.1, 0.1)
# integrate again the equation, with new parameters
# strictly speaking, we should start each simulation with
# the state obtained in the last iteration;
# y0 = y[-1,;]
y = odeint(RM, y0, t, pars)
# calculate the minimum and maximum of the populations, but
# only for the last 1000 steps (the long-term solution),
# appending the result to the list
# question: is 1000 enough? When it wouldn't be?
ymin.append(y[-1000:,:].min(axis=0))
ymax.append(y[-1000:,:].max(axis=0))
# convert the lists into arrays
ymin = array(ymin)
ymax = array(ymax)
# and now, we plot the bifurcation diagram
plot(KK, ymin[:,0], 'g', label='resource')
plot(KK, ymax[:,0], 'g')
plot(KK, ymin[:,1], 'b', label='consumer')
plot(KK, ymax[:,1], 'b')
xlabel('$K$')
ylabel('min/max populations')
legend(loc='best')
# use a linear scale in the y-axis
yscale('linear')
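# Hedged analytic check (the standard Rosenzweig-MacArthur result, stated here as an
# assumption): the interior fixed point loses stability in a Hopf bifurcation when
# K > 2*R_star + 1/(a*h), where R_star = d/(a*(e - d*h)) is the predator nullcline.
a_, h_, e_, d_ = 1., 0.1, 0.1, 0.1
R_star = d_ / (a_ * (e_ - d_ * h_))
K_hopf = 2 * R_star + 1. / (a_ * h_)
print(K_hopf)  # ~12.2, consistent with the fixed point at K=10 and the cycle at K=15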
def RM_season(y, t, r, alpha, T, K, a, h, e, d):
# in this function, `t` appears explicitly
return array([ y[0] * ( r * (1+alpha*sin(2*pi*t/T)) *
(1-y[0]/K) - a*y[1]/(1+a*h*y[0]) ),
y[1] * (e*a*y[0]/(1+a*h*y[0]) - d) ])
t = arange(0, 2000, 1.)
y0 = [1., 1.]
pars = (1., 0.1, 80., 10., 1., 0.1, 0.1, 0.1)
y = odeint(RM_season, y0, t, pars)
plot(t, y)
xlabel('time')
ylabel('population')
legend(['resource', 'consumer'])
ymin = []
ymax = []
t = arange(0, 6000, 1.) # times
TT = arange(1, 80, 2) # periods
for T in TT:
pars = (1., 0.1, T, 10., 1., 0.1, 0.1, 0.1)
y = odeint(RM_season, y0, t, pars)
ymin.append(y[-1000:,:].min(axis=0))
ymax.append(y[-1000:,:].max(axis=0))
ymin = array(ymin)
ymax = array(ymax)
plot(TT, ymin[:,0], 'g', label='resource')
plot(TT, ymax[:,0], 'g')
plot(TT, ymin[:,1], 'b', label='consumer')
plot(TT, ymax[:,1], 'b')
xlabel('$T$')
ylabel('min/max populations')
legend(loc='best')
yscale('log')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Preparation
Step2: Build Model
Step3: GridSearch HyperParameters
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import np_utils
from keras.wrappers.scikit_learn import KerasClassifier
from keras import backend as K
from sklearn.model_selection import GridSearchCV
nb_classes = 10
# input image dimensions
img_rows, img_cols = 28, 28
# load training data and do basic data normalization
(X_train, y_train), (X_test, y_test) = mnist.load_data()
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
# convert class vectors to binary class matrices
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)
def make_model(dense_layer_sizes, filters, kernel_size, pool_size):
'''Creates model comprised of 2 convolutional layers followed by dense layers
dense_layer_sizes: List of layer sizes. This list has one number for each layer
nb_filters: Number of convolutional filters in each convolutional layer
nb_conv: Convolutional kernel size
nb_pool: Size of pooling area for max pooling
'''
model = Sequential()
model.add(Conv2D(filters, (kernel_size, kernel_size),
padding='valid', input_shape=input_shape))
model.add(Activation('relu'))
model.add(Conv2D(filters, (kernel_size, kernel_size)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(pool_size, pool_size)))
model.add(Dropout(0.25))
model.add(Flatten())
for layer_size in dense_layer_sizes:
model.add(Dense(layer_size))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
return model
dense_size_candidates = [[32], [64], [32, 32], [64, 64]]
my_classifier = KerasClassifier(make_model, batch_size=32)
validator = GridSearchCV(my_classifier,
param_grid={'dense_layer_sizes': dense_size_candidates,
# nb_epoch is avail for tuning even when not
# an argument to model building function
'epochs': [3, 6],
'filters': [8],
'kernel_size': [3],
'pool_size': [2]},
scoring='neg_log_loss',
n_jobs=1)
validator.fit(X_train, y_train)
print('The parameters of the best model are: ')
print(validator.best_params_)
# validator.best_estimator_ returns sklearn-wrapped version of best model.
# validator.best_estimator_.model returns the (unwrapped) keras model
best_model = validator.best_estimator_.model
metric_names = best_model.metrics_names
metric_values = best_model.evaluate(X_test, y_test)
for metric, value in zip(metric_names, metric_values):
print(metric, ': ', value)
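# Hedged extra: GridSearchCV also exposes the full cross-validation results through the
# standard scikit-learn cv_results_ attribute, one entry per parameter combination.
for params, score in zip(validator.cv_results_['params'],
                         validator.cv_results_['mean_test_score']):
    print(score, params)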
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's look at a couple of different types of tables, and see how copy_to_pandas handles different types of data. Here is the "cost" table both in dict-of-dicts format and as a DataFrame.
Step2: It's important to emphasize what's happening. The only column in the DataFrame is "cost". This is because the cost table only has a single data field (also named "cost"). The "commodity","source","destination" columns are all primary key columns. To avoid redundant data in the DataFrame, copy_to_pandas includes this information only as part of the index of the DataFrame.
Step3: This is also a good time to point out that asking "is this row present" is completely different for the "dict-of-dicts" representation than for the DataFrame representation. Choices are good - use whatever version will make your code more readable. Personally, I'd be a little worried that someone might forget to include .index when looking for row presence in pandas, but I also think an experienced pandas developer would be pretty unlikely to make that mistake.
Step4: Now, let's look at the nodes table.
Step5: Wait, why is the "name" information duplicated here? Shouldn't the primary key just be part of the index? The answer, by default, is it depends. If the table has no data fields, then by default it will duplicate the primary key data in both the index of the DataFrame and in the column(s) of the DataFrame. In other words, by default, copy_to_pandas will duplicate data only to avoid having "no column" DataFrames. But, you can override this behavior however you want by specifying non-None booleans as the drop_pk_columns argument.
Step6: This is a DataFrame with an .index but no column. Seems like a strange thing to have, but you can create it just fine if you want.
|
<ASSISTANT_TASK:>
Python Code:
import ticdat.testing.testutils as tdu
from ticdat import TicDatFactory
tdf = TicDatFactory(**tdu.netflowSchema())
dat = tdf.copy_tic_dat(tdu.netflowData())
dat.cost
df_cost = tdf.copy_to_pandas(dat).cost
df_cost
df_cost.index
('Pens', 'Denver', 'Seattle') in df_cost
('Pens', 'Denver', 'Seattle') in df_cost.index
('Pens', 'Denver', 'Seattle') in dat.cost
dat.nodes
tdf.copy_to_pandas(dat).nodes
tdf.copy_to_pandas(dat, drop_pk_columns=True).nodes
tdf.copy_to_pandas(dat, drop_pk_columns=False).cost
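# Hedged check: with drop_pk_columns=True the nodes DataFrame keeps the primary key
# only in its index, so it should expose no data columns at all.
tdf.copy_to_pandas(dat, drop_pk_columns=True).nodes.columns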
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: In general, all weights are trainable. The only built-in Keras layer with non-trainable weights is BatchNormalization, which uses non-trainable weights to track the mean and variance of its inputs during training.
Step3: Layers & models also feature a boolean attribute trainable. Its value can be changed.
Step4: When a trainable weight becomes non-trainable, its value is no longer updated during
Step5: Do not confuse the layer.trainable attribute with the training argument of layer.__call__(); the latter controls whether the layer runs its forward pass in inference mode or in training mode.
Step6: The typical transfer-learning workflow
Step7: These are the first 9 images in the training dataset -- as you can see, they're all
Step8: We can also see that label 1 is "dog" and label 0 is "cat".
Step9: Besides, let's batch the data and use caching & prefetching to optimize loading speed.
Step10: Using random data augmentation
Step11: Let's visualize what the first image of the first batch looks like after various random
Step12: Build a model
Step13: Train the top layer
Step14: Do a round of fine-tuning of the entire model
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
layer = keras.layers.Dense(3)
layer.build((None, 4)) # Create the weights
print("weights:", len(layer.weights), layer.weights)
print("trainable_weights:", len(layer.trainable_weights),layer.trainable_weights)
print("non_trainable_weights:", len(layer.non_trainable_weights))
layer = keras.layers.BatchNormalization()
layer.build((None, 4)) # Create the weights
print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
layer = keras.layers.Dense(3)
layer.build((None, 4)) # Create the weights
layer.trainable = False # Freeze the layer
print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
# Make a model with 2 layers
layer1 = keras.layers.Dense(3, activation="relu")
layer2 = keras.layers.Dense(3, activation="sigmoid")
model = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2])
# Freeze the first layer
layer1.trainable = False
# Keep a copy of the weights of layer1 for later reference
initial_layer1_weights_values = layer1.get_weights()
# Train the model
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# Check that the weights of layer1 have not changed during training
final_layer1_weights_values = layer1.get_weights()
np.testing.assert_allclose(
initial_layer1_weights_values[0], final_layer1_weights_values[0]
)
np.testing.assert_allclose(
initial_layer1_weights_values[1], final_layer1_weights_values[1]
)
inner_model = keras.Sequential(
[
keras.Input(shape=(3,)),
keras.layers.Dense(3, activation="relu"),
keras.layers.Dense(3, activation="relu"),
]
)
model = keras.Sequential(
[keras.Input(shape=(3,)), inner_model, keras.layers.Dense(3, activation="sigmoid"),]
)
model.trainable = False # Freeze the outer model
assert inner_model.trainable == False # All layers in `model` are now frozen
assert inner_model.layers[0].trainable == False # `trainable` is propagated recursively
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
train_ds, validation_ds, test_ds = tfds.load(
"cats_vs_dogs",
# Reserve 10% for validation and 10% for test
split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"],
as_supervised=True, # Include labels
)
print("Number of training samples: %d" % tf.data.experimental.cardinality(train_ds))
print(
"Number of validation samples: %d" % tf.data.experimental.cardinality(validation_ds)
)
print("Number of test samples: %d" % tf.data.experimental.cardinality(test_ds))
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(train_ds.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image)
plt.title(int(label))
plt.axis("off")
size = (150, 150)
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))
validation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y))
test_ds = test_ds.map(lambda x, y: (tf.image.resize(x, size), y))
batch_size = 32
train_ds = train_ds.cache().batch(batch_size).prefetch(buffer_size=10)
validation_ds = validation_ds.cache().batch(batch_size).prefetch(buffer_size=10)
test_ds = test_ds.cache().batch(batch_size).prefetch(buffer_size=10)
from tensorflow import keras
from tensorflow.keras import layers
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal"),
layers.experimental.preprocessing.RandomRotation(0.1),
]
)
import numpy as np
for images, labels in train_ds.take(1):
plt.figure(figsize=(10, 10))
first_image = images[0]
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
augmented_image = data_augmentation(
tf.expand_dims(first_image, 0), training=True
)
plt.imshow(augmented_image[0].numpy().astype("int32"))
plt.title(int(labels[i]))
plt.axis("off")
base_model = keras.applications.Xception(
weights="imagenet", # Load weights pre-trained on ImageNet.
input_shape=(150, 150, 3),
include_top=False,
) # Do not include the ImageNet classifier at the top.
# Freeze the base_model
base_model.trainable = False
# Create new model on top
inputs = keras.Input(shape=(150, 150, 3))
x = data_augmentation(inputs) # Apply random data augmentation
# Pre-trained Xception weights requires that input be normalized
# from (0, 255) to a range (-1., +1.), the normalization layer
# does the following, outputs = (inputs - mean) / sqrt(var)
norm_layer = keras.layers.experimental.preprocessing.Normalization()
mean = np.array([127.5] * 3)
var = mean ** 2
# Scale inputs to [-1, +1]
x = norm_layer(x)
norm_layer.set_weights([mean, var])
# The base model contains batchnorm layers. We want to keep them in inference mode
# when we unfreeze the base model for fine-tuning, so we make sure that the
# base_model is running in inference mode here.
x = base_model(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x) # Regularize with dropout
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.summary()
model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()],
)
epochs = 20
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
# Unfreeze the base_model. Note that it keeps running in inference mode
# since we passed `training=False` when calling it. This means that
# the batchnorm layers will not update their batch statistics.
# This prevents the batchnorm layers from undoing all the training
# we've done so far.
base_model.trainable = True
model.summary()
model.compile(
optimizer=keras.optimizers.Adam(1e-5), # Low learning rate
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()],
)
epochs = 10
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
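# Hedged follow-up (assumption: the held-out test_ds defined earlier is still available):
# evaluate the fine-tuned model on the test split to check generalization.
model.evaluate(test_ds)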
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $$\begin{align}\kappa(x; \mu) &
|
<ASSISTANT_TASK:>
Python Code:
g = grid.make_cube_grid__2d_simplex_aluconform(lower_left=[0, 0], upper_right=[1, 1], num_elements=[4, 4], num_refinements=2, overlap_size=[0, 0])
#g.visualize('grid')
#bump = functions.make_expression_function_1x1(g, 'x', 'cos(0.5*pi*x[0])*cos(0.5*pi*x[1])', order=3, name='bump')
#one = functions.make_constant_function_1x1(g, 1.0, name='one')
#diffusion = [one - bump, bump]
#diffusion[0].visualize(g, 'diffusion_affine_part')
#diffusion[1].visualize(g, 'diffusion_component')
#f = functions.make_expression_function_1x1(g, 'x', '0.5*pi*pi*cos(0.5*pi*x[0])*cos(0.5*pi*x[1])', order=3, name='rhs')
#f.visualize(g, 'force')
#g_D = functions.make_constant_function_1x1(g, 0.0, name='dirichlet')
kappa = functions.make_constant_function_1x1(g, 1.0, name='diffusion')
identity = functions.make_constant_function_2x2(g, [[0, 1], [1, 0]], name='id')
f = functions.make_constant_function_1x1(g, 1.0, name='force')
g_D = functions.make_constant_function_1x1(g, 0.0, name='dirichlet')
g_N = functions.make_constant_function_1x1(g, 0.0, name='neumann')
space = gdt.make_cg_space__1x1__p1__fem(g)
#space.visualize("cg_space")
elliptic_op = gdt.make_elliptic_matrix_operator__istl_sparse(kappa, space)
system_assembler = gdt.make_system_assembler(space)
system_assembler.append(elliptic_op)
system_assembler.assemble()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Deterministic, continuous solution
Step2: Stochastic, discrete solution
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import matplotlib.pyplot as pyp
%matplotlib inline
# S -> P*S - B*S*Z - d*S
S = 500
# Z -> B*S*Z + G*R - A*S*Z
Z = 0
# R -> d*S - G*R
R = 0
P = 0.0001 # birth rate
d = 0.01 # 'natural' death percent (per day)
B = 0.0095 # transmission percent (per day)
G = 0.001 # resurect percent (per day)
A = 0.005 # destroy percent (per day)
t = 0
tend = 20
timecourse = [0]
S_tc = [S]
Z_tc = [Z]
R_tc = [R]
while t < tend:
# calculate h_i
h_birth = (S*(S-1)/2)
h_death = S
h_transmission = S*Z
h_resurrect = R
h_destroy = S*Z
R_sum = sum([h_birth*P, h_death*d, h_transmission*B, h_resurrect*G, h_destroy*A])
#print(R_sum)
a_birth = h_birth*P/R_sum
#print('a_birth: ',a_birth)
a_death = h_death*d/R_sum
#print('a_death: ', a_death)
a_transmission = h_transmission*B/R_sum
#print('a_transmission: ', a_transmission)
a_resurrect = h_resurrect*G/R_sum
#print('a_resurrect: ', a_resurrect)
a_destroy = h_destroy*A/R_sum
#print('a_destroy: ', a_destroy)
a = [a_birth, a_death, a_transmission, a_resurrect, a_destroy]
a_sum = sum(a)
r1 = np.random.uniform()
t += - (1./R_sum)*math.log(r1)
timecourse.append(t)
r2 = np.random.uniform()
if r2 > 0 and r2 < sum(a[:1]): # birth
S += 1
#print('birth')
elif r2 > sum(a[:1]) and r2 < sum(a[:2]): # death
S -= 1
R += 1
#print('death')
elif r2 > sum(a[:2]) and r2 < sum(a[:3]): # transmission
S -= 1
Z += 1
#print('transmission')
elif r2 > sum(a[:3]) and r2 < sum(a[:4]): # resurrect
R -= 1
Z += 1
#print('resurrect')
else:
Z -= 1
R += 1
#print('destroy')
S_tc.append(S)
Z_tc.append(Z)
R_tc.append(R)
pyp.plot(timecourse, S_tc)
pyp.plot(timecourse, Z_tc)
pyp.plot(timecourse, R_tc)
print('Susceptible people: ', S)
print('Zombies: ', Z)
print('Dead people: ', R)
import scipy.integrate
import numpy as np
import math
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
k1 = 0.1
k2 = 0.02
k3 = 0.4
k4 = 0.02
def dSdt( S, t):
X, Y = S
dxdt = k1*X - k2*X*Y
dydt = k4*X*Y - k3*Y
return [dxdt, dydt]
time = np.arange(0,300,1)
S0 = np.array([10., 10.] )
result = scipy.integrate.odeint( dSdt, S0, time )
plt.plot( time, result )
na = 6.022e23
# species
X = 100
Y = 100
# parameters
k1 = 0.1 # birth X
k2 = 0.02 # eaten X
k3 = 0.4 # death Y
k4 = 0.02 # reproduce Y
# time
t = 0
t_end = 50
# timecourses
timecourse = [0]
X_tc = [X]
Y_tc = [Y]
# loop
while t < t_end:
h_k1 = X*(X-1)/2
h_k2 = X*Y
h_k3 = Y
h_k4 = X*Y
R_sum = sum([h_k1*k1, h_k2*k2, h_k3*k3, h_k4*k4])
a_k1 = h_k1*k1/R_sum
a_k2 = h_k2*k2/R_sum
a_k3 = h_k3*k3/R_sum
a_k4 = h_k4*k4/R_sum
a = [a_k1, a_k2, a_k3, a_k4]
a_sum = sum(a)
r1 = np.random.uniform()
t += - (1./R_sum)*math.log(r1)
timecourse.append(t)
r2 = np.random.uniform()
if r2 < sum(a[:1]): # k1
X += 1
#print('k1')
elif r2 > sum(a[:1]) and r2 < sum(a[:2]): # k2
X -= 1
#print('k2')
elif r2 > sum(a[:2]) and r2 < sum(a[:3]): # k3
Y -= 1
#print('k3')
else: # k4
Y += 1
#print('k4')
X_tc.append(X)
Y_tc.append(Y)
plt.plot( timecourse, X_tc )
plt.plot( timecourse, Y_tc )
print(Y_tc[-10:])
print(X_tc[-10:])
import math
r1 = [(i+1)/10 for i in range(10)]
R_sum = 30
t = [(-(1./R_sum)*math.log(r)) for r in r1]
import matplotlib.pyplot as pyp
%matplotlib inline
pyp.plot(r1, t)
pyp.show()
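# Hedged check (assumption: numpy is still available as np from the cells above):
# -log(r)/R_sum is an exponential waiting time with mean 1/R_sum, so a Monte Carlo
# average should be close to 1/30 for R_sum = 30.
samples = -(1./R_sum)*np.log(np.random.uniform(size=100000))
print(samples.mean(), 1./R_sum)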
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cloning Clip_jax
Step2: pmapping the encoding function and replicating the params.
Step3: Dataset
Step4: Loading tfds
Step5: Model
Step7: Datamodule which makes the numpy dataloaders for the dataset that return batches such that their leading dimension is len(devices)
Step8: batch_per_core should be such that (n_examples//batch_per_core) % no_of_cores == 0
|
<ASSISTANT_TASK:>
Python Code:
import os
assert os.environ["COLAB_TPU_ADDR"], "Make sure to select TPU from Edit > Notebook settings > Hardware accelerator"
import os
if "google.colab" in str(get_ipython()) and "COLAB_TPU_ADDR" in os.environ:
import jax
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
print("Connected to TPU.")
else:
print('No TPU detected. Can be changed under "Runtime/Change runtime type".')
import jax
print("jax version {}".format(jax.__version__))
print("jax backend {}".format(jax.lib.xla_bridge.get_backend().platform))
print(jax.lib.xla_bridge.device_count())
print(jax.local_device_count())
import jax.numpy as jnp
devices = jax.local_devices()
print(f"jax devices:")
devices
%cd /content/
!git clone https://github.com/kingoflolz/CLIP_JAX.git
%cd /content/CLIP_JAX
!pip install ftfy regex tqdm dm-haiku
import numpy as np
from PIL import Image
import time
import clip_jax
image_fn, text_fn, jax_params, jax_preprocess = clip_jax.load("ViT-B/32", "cpu", jit=True)
jax_params_repl = jax.device_put_replicated(jax_params, devices)
image_fn_pmapped = jax.pmap(image_fn)
ds_name = "imagenette/160px-v2"
data_dir = "/root/tensorflow_datasets"
# @title Choose whether if you want to make a copy of the dataset in the drive
# @markdown Drive can be mounted to download the tfds into the drive for future uses,
# @markdown downloaded ds can be found in `your_drive_path/MyDrive/$ds_name`
to_load_into_drive = False # @param ["False", "True"] {type:"raw"}
if to_load_into_drive:
from google.colab import drive
drive.mount("/content/drive")
!mkdir /content/drive/MyDrive/$ds_name # your_drive_path
data_dir = f"/content/drive/MyDrive/{ds_name}"
import tensorflow as tf
import tensorflow_datasets as tfds
try:
tfds.load(ds_name, data_dir=data_dir)
except:
tfds.load(ds_name, data_dir=data_dir)
len(devices)
class Tpu_data_loader:
def __init__(self, loader, split, batch_per_core, no_of_cores):
self.loader = loader
self.split = split
self.batch_size = batch_per_core * no_of_cores
class NumpyDataModule:
def __init__(self, ds_name: str, data_dir: str):
self.ds_name = ds_name
self.data_dir = data_dir
self.image_size = 224
self.mean = [0.48145466, 0.4578275, 0.40821073]
self.std = [0.48145466, 0.4578275, 0.40821073]
self.ds = None
def preprocess(self, sample):
image = sample["image"]
        # Convert the image from `uint8` to `float32`.
image = tf.cast(image, tf.float32)
image = tf.image.resize_with_crop_or_pad(image, self.image_size, self.image_size)
image = (image - self.mean) / (self.std)
image = tf.transpose(image, perm=[2, 0, 1])
return image
def make_dataset(self, split, batch_per_core, no_of_cores):
ds = self.ds[split]
ds = ds.map(self.preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.batch(batch_per_core).batch(no_of_cores)
return Tpu_data_loader(
tfds.as_numpy(ds.prefetch(tf.data.experimental.AUTOTUNE)), split, batch_per_core, no_of_cores
)
def prepare_data(self):
self.ds, ds_info = tfds.load(
self.ds_name,
with_info=True,
data_dir=self.data_dir,
)
return ds_info
dm = NumpyDataModule(ds_name=ds_name, data_dir=data_dir)
ds_info = dm.prepare_data()
train_loader = dm.make_dataset("train", batch_per_core=62, no_of_cores=len(devices))
test_loader = dm.make_dataset("validation", batch_per_core=61, no_of_cores=len(devices))
print(ds_info.splits[train_loader.split].num_examples)
print(ds_info.splits[test_loader.split].num_examples)
import tqdm
def clip_extract(tpu_loader):
clip_features = []
steps = (ds_info.splits[tpu_loader.split].num_examples // tpu_loader.batch_size) + 1
for i, batch in zip(tqdm.trange(steps), tpu_loader.loader):
# the last batch is not parallised.
if i == steps - 1:
clip_encoded_batch = image_fn(jax_params, np.squeeze(batch, axis=0))
else:
clip_encoded_batch = image_fn_pmapped(jax_params_repl, batch)
clip_encoded_batch = jax.device_get(clip_encoded_batch)
clip_features.append(clip_encoded_batch)
clip_flattened_features = [fea.reshape(-1, 512) for fea in clip_features]
coco_clip = np.concatenate(clip_flattened_features)
return coco_clip
clip_train = clip_extract(train_loader)
clip_eval = clip_extract(test_loader)
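# Hedged sanity check: ViT-B/32 CLIP image features are 512-dimensional, so the
# extracted arrays should have shapes (n_train, 512) and (n_eval, 512).
print(clip_train.shape, clip_eval.shape)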
def make_tfds_and_save(numpy_data, name):
tf_ds = tf.data.Dataset.from_tensor_slices(numpy_data)
tf.data.experimental.save(tf_ds, f"/content/{name}")
return tf_ds
clip_train_ds = make_tfds_and_save(clip_train, "clip_train_ds")
clip_test_ds = make_tfds_and_save(clip_eval, "clip_test_ds")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assignment 1 (ungraded but important). Read some tutorials
Step2: Assignment 2
Step3: Assignment 3
Step4: Assignment 4
Step5:
Step6: What type of variables are in the dataset?
Step7: Interpret the plot you just made
Step8: Interpret the table you just made
Step9: Assignment 5
|
<ASSISTANT_TASK:>
Python Code:
plt.scatter?
##Some code to run at the beginning of the file, to be able to show images in the notebook
##Don't worry about this cell but run it
#Print the plots in this screen
%matplotlib inline
#Be able to plot images saved in the hard drive
from IPython.display import Image,display
#Make the notebook wider
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
!git clone https://github.com/jvns/pandas-cookbook.git
## Assingment 2 (20)
#Create a list of numbers between 1 and 100 (tip: use range())
spam = list(range(1,100))
print(spam)
#Calculate the mean of the list (tip: use sum() and len())
mean =
print(mean)
#Calculate the mean of the first half of the list (tip: slice the list)
mean_first_half = spam[:50]
print(mean_first_half)
#Check if the first element is larger or equal than the last element (>=)
first_element = spam[0]
last_elment = spam[-1]
print() #this line needs to be completed, printing the result comparison
#Convert the list spam to a tuple
tuple_spam =
print()
#import numpy as np
import numpy as np
#Create a numpy array of numbers between 1 and 10000 (tip: use np.arange())
array =
print(array)
#Calculate the mean of the list (tip: np.mean())
mean =
print(mean)
#Filter the even numbers (tip: the remainder of dividing an even number by 2 is 0) and save the results to array_even
filter_condition =
array_even = array[filter_condition]
print(array_even)
#import pandas
import pandas as pd
#Read a standard csv, no strange things
filename = "data/hw1_csv_st.csv"
df_st =
df_st.head()
#Read a standard csv, careful, the file uses tabs ("\t") as separators
filename = "data/hw1_csv_tab.csv"
df_tab =
df_tab.head()
#Read a standard csv, careful, the file uses a different encoding (no UTF-8)
filename = "data/hw1_csv_enc.csv"
df_enc = pd.read_csv(filename,encoding="iso-8859-1")
df_enc.head()
#Read a standard csv, careful, the file does not have a header
filename = "data/hw1_csv_header.csv"
df_hea =
df_hea.head()
#Read a standard csv, careful, the file uses "m" to indicate a missing value
filename = "data/hw1_csv_weird_na.csv"
df_na =
df_na.head()
#read stata file (the file is in "data/alcohol.dta" and it is a stata file)
df =
#print top of the file to explore it (.head())
#print descriptive statistics and interpret them (.describe())
df.
#keep only the households without kids and use this dataset for the rest of the assignment
filtering_condition =
df_nokids = df.loc[filtering_condition]
df = pd.read_stata("data/alcohol.dta")
#visualize the relationship between number of adults, alcohol consumption and income using the right type of plot
#save the plot as a pdf with the name "hw1_plot.pdf"
#Visualize the relationship between number of adults and number of kids using a contingency table using pd.crosstab(df[x],df[y])
%%html
<!-- TODO -->
<iframe width="560" height="315" src="https://zippy.gfycat.com/ImprobableFemaleBasenji.webm" frameborder="0" allowfullscreen></iframe>
#The only honest pie chart
Image(url="http://www.datavis.ca/gallery/images/pies/PiesIHaveEaten.png")
import seaborn as sns
sns.set(style="ticks")
# Load the example dataset for Anscombe's quartet
df = sns.load_dataset("anscombe")
# Show the results of a linear regression within each dataset
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4,
scatter_kws={"s": 50, "alpha": 0})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import nltk and use its internal download tool to get WordNet
Step2: Import the wordnet module
Step3: Access synsets of a word using the synsets function
Step4: Each synset has a definition function
Step5: List lemmas of a synset
Step6: List hypernyms and hyponyms of a synset
Step7: The closure method of synsets allows us to retrieve the transitive closure of the hypernym, hyponym, etc. relations
Step8: common_hypernyms and lowest_common_hypernyms work in relation to another synset
Step9: To iterate through all synsets, possibly by POS-tag, use all_synsets, which returns a generator
Step10: Exercise (optional)
Step11: Read the embedding into a 2D numpy array. Word forms should be stored in a separate 1D array. Also create a word index, a dictionary that returns the index of each word in the embedding. Vectors should be normalized to a length of 1
Step12: write a function that takes two words and the embedding as input and returns their cosine similarity
Step13: Implement a function that takes a word as a parameter and returns the 5 words that are closest to it in the embedding space
Step14: 11.E3 Vector similarity in WordNet
Step15: write a function that measures the similarity of two synsets based on the cosine similarity of their vectors
Step16: Write a function that takes a synset as input and retrieves the n most similar synsets, using the above embedding
Step17: Build the list of all words that are both in wordnet and the GloVe embedding. On a sample of 100 such words, measure Spearman correlation of synset similarity and vector similarity (use scipy.stats.spearmanr)
|
<ASSISTANT_TASK:>
Python Code:
!pip install nltk
import nltk
nltk.download('wordnet')
from nltk.corpus import wordnet as wn
club_synsets = wn.synsets('club')
print(club_synsets)
for synset in club_synsets:
print("{0}\t{1}".format(synset.name(), synset.definition()))
dog = wn.synsets('dog')[0]
dog.definition()
dog.lemmas()
dog.hypernyms()
dog.hyponyms()
list(dog.closure(lambda s: s.hypernyms()))
cat = wn.synsets('cat')[0]
dog.lowest_common_hypernyms(cat)
dog.common_hypernyms(cat)
dog.path_similarity(cat)
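# Hedged extra: Wu-Palmer similarity is another built-in WordNet measure, based on the
# depths of the two synsets and of their lowest common hypernym.
dog.wup_similarity(cat)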
wn.all_synsets(pos='n')
for c, noun in enumerate(wn.all_synsets(pos='n')):
if c > 5:
break
print(noun.name())
!wget http://sandbox.hlt.bme.hu/~recski/stuff/glove.6B.50d.txt.gz
!gunzip -f glove.6B.50d.txt.gz
import numpy as np
# words, word_index, emb = read_embedding('glove.6B.50d.txt')
# emb = normalize_embedding(emb)
# vec_sim('cat', 'dog', word_index, emb)
# print(nearest_n('dog', words, word_index, emb))
# print(nearest_n('king', words, word_index, emb))
# synset_emb = embed_synsets(words, word_index, emb)
# synset_sim(dog, cat, synset_emb)
# nearest_n_synsets(wn.synsets('penguin')[0], synset_emb, 10)
# compare_sims(sample, synset_emb, word_index, emb)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step7: Python Environment
Step8: Next we will call the factorize() function to calculate the factors of an integer.
Step9: The primes_between() function can tell us how many prime numbers there are in an
Step10: Additionally, we can combine len() with another built-in method, sum(), to calculate
Step11: This result makes sense intuitively because prime numbers are known to become less
Step12: This code tells Python to first convert each prime between 1 and 1066 to a string and
Step14: These examples demonstrate how Python is a modern, multi-paradigmatic language. More
Step15: With the simple wrapper above, we can get access to memory that can be shared by both
Step16: To double-check we show that the buffer is indeed a numpy array.
Step17: To send the buffer pointer to programmable logic, we use its physical address which
Step18: In this short example, we showed a simple allocation of a numpy array that is now ready
Step19: With the wake_up function defined, we then can add a new task to the event loop.
|
<ASSISTANT_TASK:>
Python Code:
"""Factors-and-primes functions.

Find factors or primes of integers, int ranges and int lists
and sets of integers with most factors in a given integer interval
"""
def factorize(n):
    """Calculate all factors of integer n."""
factors = []
if isinstance(n, int) and n > 0:
if n == 1:
factors.append(n)
return factors
else:
for x in range(1, int(n**0.5)+1):
if n % x == 0:
factors.append(x)
factors.append(n//x)
return sorted(set(factors))
else:
print('factorize ONLY computes with one integer argument > 0')
def primes_between(interval_min, interval_max):
Find all primes in the interval.
primes = []
if (isinstance(interval_min, int) and interval_min > 0 and
isinstance(interval_max, int) and interval_max > interval_min):
if interval_min == 1:
primes = [1]
for i in range(interval_min, interval_max):
if len(factorize(i)) == 2:
primes.append(i)
return sorted(primes)
else:
print('primes_between ONLY computes over the specified range.')
def primes_in(integer_list):
Calculate all unique prime numbers.
primes = []
try:
for i in (integer_list):
if len(factorize(i)) == 2:
primes.append(i)
return sorted(set(primes))
except TypeError:
print('primes_in ONLY computes over lists of integers.')
def get_ints_with_most_factors(interval_min, interval_max):
Finds the integers with the most factors.
max_no_of_factors = 1
all_ints_with_most_factors = []
# Find the lowest number with most factors between i_min and i_max
if interval_check(interval_min, interval_max):
for i in range(interval_min, interval_max):
factors_of_i = factorize(i)
no_of_factors = len(factors_of_i)
if no_of_factors > max_no_of_factors:
max_no_of_factors = no_of_factors
results = (i, max_no_of_factors, factors_of_i,\
primes_in(factors_of_i))
all_ints_with_most_factors.append(results)
# Find any larger numbers with an equal number of factors
for i in range(all_ints_with_most_factors[0][0]+1, interval_max):
factors_of_i = factorize(i)
no_of_factors = len(factors_of_i)
if no_of_factors == max_no_of_factors:
results = (i, max_no_of_factors, factors_of_i, \
primes_in(factors_of_i))
all_ints_with_most_factors.append(results)
return all_ints_with_most_factors
else:
print_error_msg()
def interval_check(interval_min, interval_max):
Check type and range of integer interval.
if (isinstance(interval_min, int) and interval_min > 0 and
isinstance(interval_max, int) and interval_max > interval_min):
return True
else:
return False
def print_error_msg():
Print invalid integer interval error message.
print('ints_with_most_factors ONLY computes over integer intervals where'
' interval_min <= int_with_most_factors < interval_max and'
' interval_min >= 1')
factorize(1066)
len(primes_between(1, 1066))
primes_1066 = primes_between(1, 1066)
primes_1066_average = sum(primes_1066) / len(primes_1066)
primes_1066_average
primes_1066_ends3 = [x for x in primes_between(1, 1066)
if str(x).endswith('3')]
print('{}'.format(primes_1066_ends3))
len(primes_1066_ends3) / len(primes_1066)
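# primes_in() and get_ints_with_most_factors() are defined above but never
# exercised in this notebook; two illustrative calls follow (the argument
# values here are arbitrary examples, not part of the original analysis).
primes_in([3, 4, 5, 6, 7, 8, 9, 10, 11])
get_ints_with_most_factors(1, 1066)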
import numpy as np
import pynq
def get_pynq_buffer(shape, dtype):
Simple function to call PYNQ's memory allocator with numpy attributes
try:
return pynq.allocate(shape, dtype)
except RuntimeError:
print('Load an overlay to allocate memory')
return
buffer = get_pynq_buffer(shape=(4,4), dtype=np.uint32)
buffer
isinstance(buffer, np.ndarray)
try:
pl_buffer_address = hex(buffer.physical_address)
pl_buffer_address
except AttributeError:
print('Load an overlay to allocate memory')
import asyncio
import random
import time
# Coroutine
async def wake_up(delay):
'''A function that will yield to asyncio.sleep() for a few seconds
and then resume, having preserved its state while suspended
'''
start_time = time.time()
print(f'The time is: {time.strftime("%I:%M:%S")}')
print(f"Suspending coroutine 'wake_up' at 'await' statement\n")
await asyncio.sleep(delay)
print(f"Resuming coroutine 'wake_up' from 'await' statement")
end_time = time.time()
sleep_time = end_time - start_time
print(f"'wake-up' was suspended for precisely: {sleep_time} seconds")
delay = random.randint(1,5)
my_event_loop = asyncio.get_event_loop()
try:
print("Creating task for coroutine 'wake_up'\n")
wake_up_task = my_event_loop.create_task(wake_up(delay))
my_event_loop.run_until_complete(wake_up_task)
except RuntimeError as err:
print (f'{err}' +
' - restart the Jupyter kernel to re-run the event loop')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Gensim word2vec
Step2: Train a word2vec model
Step3: Create a representation of each paper
Step4: Load the saved pickle and check
Step5: filter words by DF
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#config InlineBackend.figure_format = 'pdf'
from IPython.core.display import HTML
import gensim as gen
import gensim.models.word2vec as w2v
import matplotlib.pyplot as plt
from nltk.tokenize import WhitespaceTokenizer
import numpy as np
import os
import pandas as pd
try:
import cPickle as pickle
except:
import pickle
import re
import scipy.stats as stats
import scipy.sparse as sp
import string
import sys
import csv
# load the pickle containing the document-term matrix,
# put the abstracts in, and dump it to a file.
fyear = 1988
tyear = 2015
dt_fpath = 'DT_%d_%d_wabs.p'%(fyear, tyear)
with open(dt_fpath, 'r') as f:
info = pickle.load(f)
info.keys()
list_abs = info['abstracts']
list_abs[:2]
# make each abstract a list of words
list_list_abs = [ab.split(' ') for ab in list_abs if ab is not None]
print list_list_abs[20]
def paper_dataframe(fpath):
rows = []
with open(fpath, 'r') as csvfile:
reader = csv.reader(csvfile, delimiter=',', quotechar='"')
# Each read gives ['Id', 'Title', 'EventType', 'PdfName', 'Abstract', 'PaperText']
reader.next()
for row in reader:
rows.append(tuple(row))
data = pd.DataFrame(rows, columns=['Id', 'Title', 'EventType',
'PdfName', 'Abstract', 'PaperText'])
return data
text = ',sdf,.-23\][](s)'
re.sub(r'([^\w])+', ' ', text, flags=re.DOTALL)
def tokenize_simple(text):
# replace spaces with one space
text = re.sub(r'\s+', ' ', text, flags=re.DOTALL)
# remove non-English words
text = re.sub(r'[^\w]+', ' ', text, flags=re.DOTALL)
# naive tokenization
tokens = [w.lower().strip() for w in text.split(' ') if len(w) > 1]
return tokens
dframe = paper_dataframe('Papers1988_2015.csv')
n_docs = dframe.shape[0]
tok_papers = []
tok_abstracts = []
for i in xrange(n_docs):
paper = dframe['PaperText'][i]
paper_tokens = tokenize_simple(paper)
tok_papers.append(paper_tokens)
ab = list_abs[i]
if ab is None:
ab_tokens = []
else:
ab_tokens = tokenize_simple(ab)
tok_abstracts.append(ab_tokens)
# size means the latent dimension
# sentences = an iterable where each item is a list of words
size = 50
window = 5
dest_fname = 'w2v_size%d_win%d.p'%(size, window)
model = w2v.Word2Vec(tok_papers, size=size, window=window, min_count=5, workers=4)
model.save(dest_fname)
model.wv.similarity('neural', 'deep')
model.wv.similarity('neural', 'kernel')
model.wv.doesnt_match('supervised unsupervised neuron reinforcement'.split())
model.wv.doesnt_match('kernel gretton hsic mmd'.split())
model.wv['kernel']
'kernel' in model.wv
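# As a quick sanity check of the trained embedding, gensim can list the words
# closest to a query word by cosine similarity (the query here is just an example).
model.wv.most_similar('kernel', topn=5)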
titles = info['titles']
# each element is the representation of the paper.
# This is a matrix with each row corresponding to the embedding
# of a word in the abstract and the title.
paper_reps = []
for i in xrange(n_docs):
title_tokens = tokenize_simple(titles[i])
rep_words = tok_abstracts[i] + title_tokens
# embed each word in rep_words (if in the vocabulary)
rep = []
for w in rep_words:
# only embed words that are in the vocabulary
if w in model.wv:
embed = model.wv[w]
rep.append(embed)
mat = np.vstack(rep)
paper_reps.append(mat)
len(paper_reps)
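# A convenient single-vector summary of each paper is the mean of its word
# vectors; with normalized rows, cosine similarity between papers is then a
# plain dot product. This is a small sketch beyond the original notebook.
mean_reps = np.vstack([mat.mean(axis=0) for mat in paper_reps])
mean_reps = mean_reps / np.linalg.norm(mean_reps, axis=1, keepdims=True)
sims_to_first = mean_reps.dot(mean_reps[0])
# title of the paper most similar to the first paper (index 0 is itself)
print titles[np.argsort(-sims_to_first)[1]]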
# save the pickle with the paper representations
dt_dest = 'DT_%d_%d_wembed.p'%(fyear, tyear)
info['paper_reps'] = paper_reps
with open(dt_dest, 'w') as f:
pickle.dump(info, f)
with open('DT_%d_%d_wembed.p'%(fyear, tyear), 'r') as f:
info = pickle.load(f)
info.keys()
DT = info['DT']
abstracts = info['abstracts']
paper_reps = info['paper_reps']
titles = info['titles']
words = info['words']
# document frequency of each word
n_docs = DT.shape[0]
DF = np.array( (DT > 0).sum(0) )[0]
df_lb = 7
df_ub = int(0.15*n_docs)
print('n = #docs: %d'%n_docs)
print('original #words: %d'%len(words))
print('#words with %d <= df: %d'% (df_lb, np.sum(DF>=df_lb) ) )
print('#words with df <= %d: %d'% (df_ub, np.sum(DF<=df_ub) ) )
df_I = np.logical_and(DF>=df_lb, DF<=df_ub)
print('#words with %d <= df <= %d: %d'%
(df_lb, df_ub, np.sum( df_I) ) )
df_words = np.array(words)[df_I]
print df_words.tolist()
# filter out words
fDT = DT[:, df_I]
fwords = np.array(words)[df_I].tolist()
info['DT'] = fDT
info['words'] = fwords
dffiltered_fname = 'DT_%d_%d_wem_df%d_%d.p'%(fyear, tyear, df_lb, df_ub)
with open(dffiltered_fname, 'w') as f:
pickle.dump(info, f)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Cleaning up and summarizing the data
Step2: 3) Splitting data into training and testing sets
Step3: 4. Running algorithms on the data
Step4: 93-95% accuracy, not too shabby! Have a look and see how random forests with suboptimal hyperparameters fare. We got around 91-92% accuracy on the out of the box (untuned) random forests, which actually isn't terrible.
Step5: Looks good! This is similar performance to what we saw in the random forests.
Step6: Looks like this neural network (given this dataset, architecture, and hyperparameterisation) is doing slightly worse on the spam dataset. That's okay, it could still be picking up on a signal that the random forest and SVM weren't.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
# Import the dataset
dataset_path = "spam_dataset.csv"
dataset = pd.read_csv(dataset_path, sep=",")
# Take a peak at the data
dataset.head()
# Reorder the data columns and drop email_id
cols = dataset.columns.tolist()
cols = cols[2:] + [cols[1]]
dataset = dataset[cols]
# Examine shape of dataset and some column names
print dataset.shape
print dataset.columns.values[0:10]
# Summarise feature values
dataset.describe()
# Convert dataframe to numpy array and split
# data into input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-1].astype(float)
y = npArray[:,-1]
from sklearn import preprocessing
from sklearn.cross_validation import train_test_split
# Scale and split dataset
X_scaled = preprocessing.scale(X)
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X_scaled, y, random_state=1)
from sklearn import metrics
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Search for good hyperparameter values
# Specify values to grid search over
n_estimators = np.arange(1, 30, 5)
max_features = np.arange(1, X.shape[1], 10)
max_depth = np.arange(1, 100, 10)
hyperparameters = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth}
# Grid search using cross-validation
gridCV = GridSearchCV(RandomForestClassifier(), param_grid=hyperparameters, cv=10, n_jobs=4)
gridCV.fit(XTrain, yTrain)
best_n_estim = gridCV.best_params_['n_estimators']
best_max_features = gridCV.best_params_['max_features']
best_max_depth = gridCV.best_params_['max_depth']
# Train classifier using optimal hyperparameter values
# We could have also gotten this model out from gridCV.best_estimator_
clfRDF = RandomForestClassifier(n_estimators=best_n_estim, max_features=best_max_features, max_depth=best_max_depth)
clfRDF.fit(XTrain, yTrain)
RF_predictions = clfRDF.predict(XTest)
print (metrics.classification_report(yTest, RF_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions),2))
from sklearn.svm import SVC
# Search for good hyperparameter values
# Specify values to grid search over
g_range = 2. ** np.arange(-15, 5, step=2)
C_range = 2. ** np.arange(-5, 15, step=2)
hyperparameters = [{'gamma': g_range,
'C': C_range}]
# Grid search using cross-validation
grid = GridSearchCV(SVC(), param_grid=hyperparameters, cv= 10)
grid.fit(XTrain, yTrain)
bestG = grid.best_params_['gamma']
bestC = grid.best_params_['C']
# Train SVM and output predictions
rbfSVM = SVC(kernel='rbf', C=bestC, gamma=bestG)
rbfSVM.fit(XTrain, yTrain)
SVM_predictions = rbfSVM.predict(XTest)
print metrics.classification_report(yTest, SVM_predictions)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, SVM_predictions),2)
from multilayer_perceptron import multilayer_perceptron
# Search for good hyperparameter values
# Specify values to grid search over
layer_size_range = [(3,2),(10,10),(2,2,2),10,5] # different networks shapes
learning_rate_range = np.linspace(.1,1,3)
hyperparameters = [{'hidden_layer_sizes': layer_size_range, 'learning_rate_init': learning_rate_range}]
# Grid search using cross-validation
grid = GridSearchCV(multilayer_perceptron.MultilayerPerceptronClassifier(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)
# Output best hyperparameter values
best_size = grid.best_params_['hidden_layer_sizes']
best_best_lr = grid.best_params_['learning_rate_init']
# Train neural network and output predictions
nnet = multilayer_perceptron.MultilayerPerceptronClassifier(hidden_layer_sizes=best_size, learning_rate_init=best_best_lr)
nnet.fit(XTrain, yTrain)
NN_predictions = nnet.predict(XTest)
print metrics.classification_report(yTest, NN_predictions)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, NN_predictions),2)
# here's a rough solution
import collections
# stick all predictions into a dataframe
predictions = pd.DataFrame(np.array([RF_predictions, SVM_predictions, NN_predictions])).T
predictions.columns = ['RF', 'SVM', 'NN']
predictions = pd.DataFrame(np.where(predictions=='yes', 1, 0),
columns=predictions.columns,
index=predictions.index)
# initialise empty array for holding predictions
ensembled_predictions = np.zeros(shape=yTest.shape)
# majority vote and output final predictions
for test_point in range(predictions.shape[0]):
predictions.iloc[test_point,:]
counts = collections.Counter(predictions.iloc[test_point,:])
majority_vote = counts.most_common(1)[0][0]
# output votes
ensembled_predictions[test_point] = majority_vote.astype(int)
print "The majority vote for test point", test_point, "is: ", majority_vote
# Get final accuracy of ensembled model
yTest[yTest == "yes"] = 1
yTest[yTest == "no"] = 0
print metrics.classification_report(yTest.astype(int), ensembled_predictions.astype(int))
print "Ensemble Accuracy:", round(metrics.accuracy_score(yTest.astype(int), ensembled_predictions.astype(int)),2)
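# A confusion matrix gives a finer-grained view of the ensembled predictions
# than overall accuracy alone; this is a small addition to the original analysis.
print metrics.confusion_matrix(yTest.astype(int), ensembled_predictions.astype(int))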
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load some house sales data
Step2: Exploring the data for housing sales
Step3: Create a simple regression model of sqft_living to price
Step4: Build the regression model using only sqft_living as a feature
Step5: Evaluate the simple model
Step6: RMSE of about \$255,170!
Step7: Above
Step8: Explore other features in the data
Step9: Pull the bar at the bottom to view more of the data.
Step10: Comparing the results of the simple model with adding more features
Step11: The RMSE goes down from \$255,170 to \$179,508 with more features.
Step12: <img src="house-5309101200.jpg">
Step13: In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.
Step14: <img src="house-1925069082.jpg">
Step15: In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house.
Step16: <img src="house-bill-gates.jpg">
Step17: The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.)
Step18: 2. Filtering data
Step19: 3. Building a regression model with several more features
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('home_data.gl/')
sales
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="sqft_living", y="price")
train_data,test_data = sales.random_split(.8,seed=0)
sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'])
print test_data['price'].mean()
print sqft_model.evaluate(test_data)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(test_data['sqft_living'],test_data['price'],'.',
test_data['sqft_living'],sqft_model.predict(test_data),'-')
sqft_model.get('coefficients')
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
sales[my_features].show()
sales.show(view='BoxWhisker Plot', x='zipcode', y='price')
my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features)
print my_features
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
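# The 'rmse' value reported by .evaluate() can also be reproduced by hand from
# the predictions, which makes clear what is being compared (a small sketch
# that assumes standard SArray arithmetic):
errors = my_features_model.predict(test_data) - test_data['price']
rss_mean = (errors * errors).mean()
print rss_mean ** 0.5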
house1 = sales[sales['id']=='5309101200']
house1
print house1['price']
print sqft_model.predict(house1)
print my_features_model.predict(house1)
house2 = sales[sales['id']=='1925069082']
house2
print sqft_model.predict(house2)
print my_features_model.predict(house2)
bill_gates = {'bedrooms':[8],
'bathrooms':[25],
'sqft_living':[50000],
'sqft_lot':[225000],
'floors':[4],
'zipcode':['98039'],
'condition':[10],
'grade':[10],
'waterfront':[1],
'view':[4],
'sqft_above':[37500],
'sqft_basement':[12500],
'yr_built':[1994],
'yr_renovated':[2010],
'lat':[47.627606],
'long':[-122.242054],
'sqft_living15':[5000],
'sqft_lot15':[40000]}
print my_features_model.predict(graphlab.SFrame(bill_gates))
sales[sales['zipcode'] == '98039'] # zipcode is stored as a string, so compare against the string '98039'
sales_1 = sales[sales['zipcode'] == '98039']
print sales_1['price'].mean()
house_3 = sales[(sales['sqft_living'] > 2000) & (sales['sqft_living'] < 4000 )]
print sales.shape
print house_3.shape
print 9111.0/21613.0
advanced_features = [
'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',
'condition', # condition of house
'grade', # measure of quality of construction
'waterfront', # waterfront property
'view', # type of view
'sqft_above', # square feet above ground
'sqft_basement', # square feet in basement
'yr_built', # the year built
'yr_renovated', # the year renovated
'lat', 'long', # the lat-long of the parcel
'sqft_living15', # average sq.ft. of 15 nearest neighbors
'sqft_lot15', # average lot size of 15 nearest neighbors
]
advanced_features_model = graphlab.linear_regression.create(train_data, target='price', features=advanced_features)
print my_features_model.evaluate(test_data)
print advanced_features_model.evaluate(test_data)
print 178861.74561891545 - 156757.5481062501
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Face Generation
Step3: Explore the Data
Step5: CelebA
Step7: Preprocess the Data
Step10: Input
Step13: Discriminator
Step16: Generator
Step19: Loss
Step22: Optimization
Step25: Neural Network Training
Step27: Train
Step29: MNIST
Step31: CelebA
|
<ASSISTANT_TASK:>
Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
# TODO: Implement Function
inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs_real, inputs_z, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def discriminator(images, reuse=False):
Create the discriminator network
:param image: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
# TODO: Implement Function
alpha=0.2
# TODO: Implement Function
with tf.variable_scope('discriminator', reuse=reuse):
# input 28x28x3
x1 = tf.layers.conv2d(images, 64, 5, strides=1,padding='same')
relu1 = tf.maximum(alpha*x1, x1)
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
x3 = tf.layers.conv2d(relu2, 256, 5, strides=1, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha*bn3, bn3)
flat = tf.reshape(relu3, (-1, 14*14*256)) # relu3 is 14x14 with 256 filters after the layers above
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
def generator(z, out_channel_dim, is_train=True):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
# TODO: Implement Function
alpha = 0.2
# TODO: Implement Function
with tf.variable_scope('generator', reuse=False if is_train==True else True):
x1 = tf.layers.dense(z, 7*7*512)
x1 = tf.reshape(x1, (-1, 7, 7, 512))
x1 = tf.layers.batch_normalization(x1, training=is_train)
x1 = tf.maximum(alpha*x1, x1)
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides = 1, padding='same')
x2 = tf.layers.batch_normalization(x2, training=is_train)
x2 = tf.maximum(alpha*x2, x2)
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides = 2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=is_train)
x3 = tf.maximum(alpha*x3, x3)
logits = tf.layers.conv2d_transpose(x3, out_channel_dim, 5, strides = 2, padding='same')
out = tf.tanh(logits)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
# TODO: Implement Function
g_model = generator(input_z, out_channel_dim, is_train=True)
d_model_real, d_logits_real = discriminator(input_real, reuse=False)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# TODO: Implement Function
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
all_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
g_update_ops = [var for var in all_update_ops if var.name.startswith('generator')]
d_update_ops = [var for var in all_update_ops if var.name.startswith('discriminator')]
with tf.control_dependencies(d_update_ops):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
with tf.control_dependencies(g_update_ops):
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
# TODO: Build Model
z_size = z_dim
steps = 0
input_real, input_z, testt_ = model_inputs(*data_shape[1:4], z_dim)
d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
# TODO: Train Model
steps += 1
batch_images = 2*batch_images
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_opt, feed_dict={input_z: batch_z})
if steps % 10 == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(epoch_i+1, epoch_count),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
if steps % 100 == 0:
show_generator_output(sess, 6, input_z, data_shape[3], data_image_mode)
batch_size = 32
z_dim = 100
learning_rate = 0.002
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
batch_size = 128
z_dim = 100
learning_rate = 0.002
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Exercise
Step3: Exercise
Step4: Computing the interval from the 2.5th to the 97.5th percentile effectively gives you the central 95% of the distribution's mass. Knowing this range can be quite useful in other data analysis cases.
Step5: Here is an attempt at writing a numpy-vectorized version of the simulation.
|
<ASSISTANT_TASK:>
Python Code:
import networkx as nx
G = nx.read_gpickle('datasets/divvy_2013/divvy_graph.pkl')
total_trips = sum([d['count'] for _,_,d in G.edges(data=True)])
print(total_trips)
float(total_trips) / len(G.nodes()) ** 2
from collections import Counter
import matplotlib.pyplot as plt
%matplotlib inline
num_trips = [d['count'] for _,_,d in G.edges(data=True)]
plt.bar(Counter(num_trips).keys(), Counter(num_trips).values())
import numpy as np
np.percentile(num_trips, [2.5, 97.5, 100])
# Grab all the nodes from G.
G_random = nx.Graph()
G_random.add_nodes_from(G.nodes(data=True))
G_random.nodes(data=True)
# Total number of possible edges.
edges = len(G.nodes())**2
# Recall that the total trips was recorded in the variable total_trips
total_trips
# coins = 800000
# slots = 900000
trip_edges = np.random.randint(edges, size=total_trips)
trips_in_each_edge = np.histogram(trip_edges, bins=np.arange(edges+1))[0]
trips_in_each_edge
plt.hist(trips_in_each_edge)
print(np.mean(trips_in_each_edge), np.var(trips_in_each_edge))
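# Compare the null (random-allocation) distribution with the observed trip
# counts computed earlier; under the null model trips spread almost uniformly
# over station pairs, so the same percentiles can be contrasted directly.
print(np.percentile(trips_in_each_edge, [2.5, 97.5, 100]))
print(np.percentile(num_trips, [2.5, 97.5, 100]))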
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define the input and output files and input parameters for operator estimation
Step2: Load spectrums from input files, and calculate linear regression based on well impedance spectrum on log-log scale
Step3: Calculate raw operator, based on the ratio of trend spectrum from wells and normalized seismic amplitude spectrum
Step4: Precalculation for inverse dft, to transform raw operator to time domain.
|
<ASSISTANT_TASK:>
Python Code:
import sys
import numpy as np
from scipy.stats import linregress
import matplotlib.pyplot as plt
%matplotlib inline
#Input spectrum files
seisfile="F2_01_seismic_amplitude_spectrum.dat"
wellfile="F2_01_well_AI_amplitude_spectrum.dat"
#Shape parameter for Kaiser window
beta=150
#Normalized amplitude treshold, smaller amplitudes will be skipped during computation
treshold=0.2
#Operator phase spectrum depending on polarity of the seismic data
#SEG normal (AI increase = trough): phase=0; SEG reverse (AI increase = peak): phase=180
phase=0
#Number of samples of the operator
num=80
#Ouput operator file
operatorfile="F2_01_ssb_operator.dat"
#Load exported spectrums from OpendTect
freqseis, ampseis=np.loadtxt(seisfile, unpack=True)
freqwell, ampwell=np.loadtxt(wellfile, unpack=True)
# dB to amplitude conversion
ampwell=np.power(10,ampwell/20)
ampseis=np.power(10,ampseis/20)
#Normalize seismic spectrum
normseis = ampseis / np.max(ampseis)
#Calculate logarithmic well spectrum
logfreq= np.log10(freqwell)
logamp=np.log10(ampwell)
#Linear regression on logarithmic well spectrum
slope, intercept, rvalue, pvalue, stderr = linregress(logfreq,logamp)
print ('Regression results:')
print ("Intercept:", intercept)
print ("Slope :", slope)
print ("R-value :", rvalue)
#Plot well based AI spectrum with regression line
lintrend=intercept+(slope)*logfreq
plt.figure(0)
plt.title('Well Impedance spectrum')
plt.scatter(logfreq,logamp, label="AI impedance spectrum")
plt.xlabel("log10(frequency)")
plt.ylabel("log10(amplitude)")
plt.plot(logfreq,lintrend, label="Trend line", linewidth=3, color='red')
plt.xlim(np.min(logfreq),np.max(logfreq))
plt.legend()
plt.grid()
plt.show()
#Reflectivity trend well spectrum
WelltrendSpectrum = np.power(freqseis,(1+slope))
NormWellTrendSpectrum = WelltrendSpectrum / max(WelltrendSpectrum)
#Calculate residual spectrum
ResidualSpectrum=np.zeros(len(normseis))
for i in range(len(normseis)):
if normseis[i]>treshold:
ResidualSpectrum[i]= WelltrendSpectrum[i] / normseis[i]
#Normalize residual spectrum
ResidualSpectrum=ResidualSpectrum / np.max(ResidualSpectrum)
#Plot normalized seismic spectrum with well trend spectrum
plt.figure(1)
thold=np.ones(len(freqseis))
thold=treshold*thold
plt.title('Seismic spectrums')
plt.plot(freqseis,normseis, label='Normalized seismic amplitude spectrum')
plt.plot(freqseis,NormWellTrendSpectrum, label='Regression based reflectivity spectrum')
plt.plot(freqseis,ResidualSpectrum, label='Frequency domain raw operator', color='red')
plt.plot(freqseis,thold, label='Amplitude treshold', color='grey')
plt.xlabel('Frequency [Hz]')
plt.ylabel('Normalized Amplitude')
plt.ylim(0,1.5)
plt.xlim(0,100)
plt.legend()
plt.grid()
plt.show()
#Calculate dt
dt=1/(2*np.max(freqseis))
#Setup complex amplitude spectrum for ifft with phase assumption
cspectrum_poz=ResidualSpectrum*(np.cos(np.radians(phase))+1j*np.sin(np.radians(phase)))
cspectrum_neg=ResidualSpectrum*(np.cos(-1*np.radians(phase))+1j*np.sin(-1*np.radians(phase)))
rev_cspectrum_neg=np.fliplr([cspectrum_neg])[0]
input_cspectrum=np.append(cspectrum_poz,rev_cspectrum_neg)
#Calculate ifft and reorder arrays
t_op=np.fft.ifft(input_cspectrum)
start_t=(-1/2)*dt*(len(input_cspectrum))+dt
t_shift=np.linspace(start_t,-1*start_t,len(t_op))-dt/2
t_op_shift=np.fft.ifftshift(t_op)
#Tapering of the time domain operator using a Kaiser window, and calculation of the operator triming indexes, and plot the final operator
#Tapering
window_kaiser=np.kaiser(len(t_shift),beta)
t_op_final=t_op_shift*window_kaiser
#Operator trimming indexes
start_i=((int(len(t_shift)/2))-int(num/2))
stop_i=(int(len(t_shift)/2))+int(num/2)+1
#Plot final time domain operator
plt.figure(2)
plt.title('Spectral blueing operator')
plt.plot(t_shift,t_op_final.real, label='Time domain operator')
#plt.fill_between(t_shift, t_op_final.real,0, t_op_final > 0.0, interpolate=False, hold=True, color='blue', alpha = 0.5)
plt.xlim(t_shift[start_i],t_shift[stop_i])
plt.ylim(-0.07,0.25)
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
#Save final operator
np.savetxt(operatorfile,t_op_final[start_i:stop_i].real)
#QC operator by transform tapered operator to frequency domain
bt_op=np.fft.fft(t_op_final)
backfreq=np.fft.fftfreq(len(t_op_final),dt)
#Keeping only the positive frequencies
backfreq_poz=backfreq[:int(len(backfreq)/2)]
bt_op_poz=bt_op[:int(len(bt_op)/2)]
output_spectrum=bt_op_poz*normseis
plt.figure(3)
plt.title('QC tapering')
plt.plot(backfreq_poz,abs(bt_op_poz), label='Tapered backtransformed operator', color='red')
plt.plot(freqseis,ResidualSpectrum, label='Frequency domain raw operator')
#plt.fill_between(backfreq_poz, abs(bt_op_poz),0, abs(bt_op_poz) > 0.0, interpolate=False, hold=True, color='red', alpha = 0.3)
plt.ylabel('Normalized amplitude')
plt.xlabel('Frequency (Hz)')
plt.xlim(0,100)
plt.legend()
plt.grid()
plt.show()
print('Operator is saved to:',operatorfile)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now using NERC region shapefiles created by DHS
Step2: Maps of 2001 and 2017 annual values
Step3: Maps of difference from national average
Step4: Read file with facility location, generation, and fuel data
Step5: Drop plants without lat/lon data
Step6: Because I have monthly data for every facility from 2001-2017, there are lots of duplicate rows. No need to do a spatial join on every row. Just keep one instance of each facility in each year.
Step7: Use Point from Shapely to create the geometry list of facility locations. crs is the coordinate reference system that translates lat/lon into a specific map projection.
Step8: Spatial join of NERC dataframe with polygons and facility dataframe with points
Step9: Method 2 (faster when using the default operation)
Step10: Make the new geometry of facility locations
Step11: Create new dataframe with the data I want to keep and the new geometry
Step12: Changing the crs to see what happens
Step13: NEW - read file with only plant id, state, and lat/lon
Step14: The nerc column contains labels from a spatial join done in QGIS using this same shapefile and the lat/lon data.
Step15: Every plant id is unique
Step16: Method 1 - use defaults
Step17: We can already see that there are more rows in this dataframe than there were power plants.
Step18: Method 2 - use 'within'
Step19: Same size dataframe as from Method 1. Yet again, there appear to be extra results.
Step20: Method 3 - use 'contains'
Step21: Still too many results
Step22: What facilities are missing?
Step23: I figured out that plant id 641 is one that shows up in both FRCC and SERC
Step24: So 641 definitely is in both NERC regions. And apparently the regions overlap? But they aren't supposed to. And I didn't have any plants show up in multiple regions when doing the spatial join in QGIS.
Step25: There are 36 plants with duplicate NERC regions.
Step26: No difference in score when X is preprocessed with StandardScaler
|
<ASSISTANT_TASK:>
Python Code:
# EIA NERC region shapefile, which has an "Indeterminate" region
# path = os.path.join(data_path, 'NERC_Regions_EIA', 'NercRegions_201610.shp')
# regions = gpd.read_file(path)
# regions.crs
path = os.path.join(data_path, 'nercregions', 'NERCregions.shp')
regions_nerc = gpd.read_file(path)
regions_nerc['nerc'] = regions_nerc['NERCregion']
regions_nerc
regions_nerc.boundary.plot()
path = os.path.join(data_path, 'NERC_Regions', 'NERC_Regions.shp')
regions = gpd.read_file(path)
regions.columns = regions.columns.str.lower()
regions['nerc'] = regions['name'].str.split().str[-1].str.strip('()')
# regions = regions.loc[:, ['nerc', 'geometry']]
regions.head()
regions.loc[regions['nerc']=='WECC'].plot(cmap='tab20', alpha=0.5)
regions.plot()
regions.to_crs(epsg=2163).plot(cmap='tab20', alpha=0.5,
linewidth=0.5, edgecolor='0.1')
# This is slow!
regions_nerc = regions.dissolve(by='nerc', as_index=False)
regions_nerc.to_crs(epsg=2163).plot()
path = join(data_path, 'NERC_Regions', 'Aggregated NERC Regions.geojson')
regions_nerc.to_file(path, driver='GeoJSON')
# fiona.open()  # stray exploratory call; fiona.open() requires a path argument, so it is commented out here
path = join(data_path, 'NERC_Regions', 'Aggregated NERC Regions.geojson')
regions_nerc = gpd.read_file(path, driver='GeoJSON')
path = os.path.join(data_path, 'cb_2016_us_state_20m', 'cb_2016_us_state_20m.shp')
states = gpd.read_file(path)
states.crs
drop_states = ['Alaska', 'Hawaii', 'Puerto Rico']
states = states.loc[~states['NAME'].isin(drop_states)]
states.to_crs(epsg=2163).plot()
path = join(data_path, 'final NERC data',
'Summary table {}.csv'.format(file_date))
index = pd.read_csv(path)
index
for nerc in regions_nerc['nerc'].unique():
try:
val_2017 = index.loc[index['nerc']==nerc, '2017'].values[0]
val_2001 = index.loc[index['nerc']==nerc, '2001'].values[0]
reduce = index.loc[index['nerc']==nerc, 'Percent Reduction'].values[0]
# print(val)
regions_nerc.loc[regions_nerc['nerc']==nerc, 2017] = val_2017
regions_nerc.loc[regions_nerc['nerc']==nerc, 2001] = val_2001
regions_nerc.loc[regions_nerc['nerc']==nerc, 'reduction'] = '{:.0%}'.format(reduce)
regions_nerc.loc[regions_nerc['nerc']==nerc, 'reduction value'] = reduce
except:
pass
regions_nerc
regions_albers = regions_nerc.to_crs(epsg=2163)
states_albers = states.to_crs(epsg=2163)
regions_albers.plot(column=2001, cmap='cividis_r', edgecolor='0.1', linewidth=1)
def plot_nerc_annual(regions_proj, states_proj, data_col, text_col,
cmap='cividis_r', vmin=None, vmax=None, title=None,
cbar_title=None, **kwargs):
states_ec = kwargs.get('states_ec', '0.6')
regions_ec = kwargs.get('regions_ec', '0.2')
regions_lw = kwargs.get('regions_lw', 0.75)
font_size = kwargs.get('font_size', 9)
bbox_alpha = kwargs.get('bbox_alpha', 0.7)
FRCC_x = kwargs.get('FRCC_x', 4.75)
SERC_x = kwargs.get('SERC_x', 2)
SERC_y = kwargs.get('SERC_y', -2)
SPP_y = kwargs.get('SPP_y', 1.75)
RFC_y = kwargs.get('RFC_y', -0.5)
fig, ax = plt.subplots(figsize=(8,3.5))
# set aspect to equal. This is done automatically
# when using *geopandas* plot on it's own, but not when
# working with pyplot directly.
ax.set_aspect('equal')
regions_proj.plot(ax=ax, column=data_col, cmap=cmap, legend=True,
vmin=vmin, vmax=vmax)
states_proj.plot(ax=ax, color='none', edgecolor=states_ec)
regions_proj.plot(ax=ax, color='none', edgecolor=regions_ec,
linewidth=regions_lw)
# plt.text(x=1.1, y=1.01, s=cbar_title, transform=ax.transAxes,
# ha='center', va='bottom', fontdict={'size':font_size})
plt.title(title)
for point, nerc in zip(regions_proj.centroid, regions_proj['nerc'].values):
text = regions_proj.loc[regions_proj['nerc']==nerc, text_col].values[0]
# text = '{}'.format(nerc, reduce)
x = point.x
y = point.y
if nerc == 'FRCC':
x = x + conv_lon(FRCC_x) #-79 (use the FRCC_x keyword argument rather than a hard-coded 4.75)
y = y - conv_lat(1)#28
rot = -67
plt.text(x, y, text, ha='center', va='center',
fontdict={'size':font_size})
elif nerc == 'NPCC':
x = x - conv_lon(1.5)
y = y + conv_lat(2.1)
plt.text(x, y, text, ha='center',
fontdict={'size':font_size})
elif nerc == 'SERC':
x = x + conv_lon(SERC_x)
y = y + conv_lat(SERC_y)
plt.text(x, y, text, ha='center', va='center',
bbox=dict(facecolor='white',
alpha=bbox_alpha,
boxstyle="square"),
fontdict={'size':font_size})
elif nerc == 'RFC':
# x = x + conv_lon(RFC_x)
y = y + conv_lat(RFC_y)
plt.text(x, y, text, ha='center', va='center',
bbox=dict(facecolor='white',
alpha=bbox_alpha,
boxstyle="square"),
fontdict={'size':font_size})
elif nerc == 'SPP':
# x = x + 2
y = y + conv_lat(SPP_y)
plt.text(x, y, text, ha='center', va='center',
bbox=dict(facecolor='white',
alpha=bbox_alpha,
boxstyle="square"),
fontdict={'size':font_size})
else:
plt.text(x, y, text, ha='center', va='center',
bbox=dict(facecolor='white',
alpha=bbox_alpha,
boxstyle="square"),
fontdict={'size':font_size})
sns.despine(left=True, bottom=True)
ax.set_yticklabels(labels=[])
ax.set_xticklabels(labels=[])
cax = fig.get_axes()[-1]
cax.set_title(cbar_title, fontdict={'size':font_size})
# https://gist.github.com/springmeyer/871897
def conv_lon(x):
newx = x * 20037508.34 / 180
return newx
def conv_lat(y):
newy = np.log(np.tan((90 + y) * np.pi / 360)) / (np.pi / 180)
newy *= 20037508.34 / 180
return newy
title = '2001 U.S. Average\n{} g CO$_2$/kWh'.format(usa_2001)
kwargs = dict(
regions_lw = 1,
regions_ec = '0.1',
SERC_y = -1.5,
SPP_y = 2.25
)
vmin = regions_albers.loc[:, [2001, 2017]].min().min()
vmax = regions_albers.loc[:, [2001, 2017]].max().max()
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col=2001,
text_col='nerc', vmin=vmin, vmax=vmax, title=title,
cbar_title='g CO$_2$/kWh', **kwargs)
path = join(figure_path, 'NERC map_cividis_2001.png')
plt.savefig(path, bbox_inches='tight', dpi=350)
title = '2017 U.S. Average\n{} g CO$_2$/kWh (↓ 30%)'.format(usa_2017)
kwargs = dict(
regions_lw = 1,
regions_ec = '0.1',
SERC_y = -1.5,
SPP_y = 2.25
)
vmin = regions_albers.loc[:, [2001, 2017]].min().min()
vmax = regions_albers.loc[:, [2001, 2017]].max().max()
regions_albers['arrow reduction'] = '↓ ' + regions_albers['reduction']
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col=2017,
text_col='arrow reduction', vmin=vmin, vmax=vmax, title=title,
cbar_title='g CO$_2$/kWh', **kwargs)
path = join(figure_path, 'NERC map_cividis_2017_change.png')
plt.savefig(path, bbox_inches='tight', dpi=350)
title = '2001 U.S. Average\n{} g CO$_2$/kWh'.format(usa_2001)
vmin = regions_albers.loc[:, [2001, 2017]].min().min()
vmax = regions_albers.loc[:, [2001, 2017]].max().max()
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col=2001,
text_col='nerc', vmin=vmin, vmax=vmax, cmap='Reds',
title=title, cbar_title='g CO$_2$/kWh')
path = join(figure_path, 'NERC map_reds_2001.png')
plt.savefig(path, bbox_inches='tight', dpi=350)
title = '2017 U.S. Average\n{} g CO$_2$/kWh (↓ 30%)'.format(usa_2017)
vmin = regions_albers.loc[:, [2001, 2017]].min().min()
vmax = regions_albers.loc[:, [2001, 2017]].max().max()
regions_albers['arrow reduction'] = '↓ ' + regions_albers['reduction']
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col=2017,
text_col='arrow reduction', vmin=vmin, vmax=vmax, cmap='Reds',
title=title, cbar_title='g CO$_2$/kWh')
path = join(figure_path, 'NERC map_reds_2017_change.png')
plt.savefig(path, bbox_inches='tight', dpi=350)
title = '2001 U.S. Average\n{} g CO$_2$/kWh'.format(usa_2001)
vmax = max(abs(regions_albers['2017 diff'].min()), regions_albers['2017 diff'].max())
vmin = -vmax
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col='2001 diff',
text_col='nerc', cmap='PRGn_r', vmin=vmin, vmax=vmax,
title=title, cbar_title='g CO$_2$/kWh')
path = join(figure_path, 'NERC map_diverging_2001_2017_bounds.png')
plt.savefig(path, bbox_inches='tight', dpi=350)
title = '2017 U.S. Average\n{} g CO$_2$/kWh (↓ 30%)'.format(usa_2017)
vmax = max(abs(regions_albers['2017 diff'].min()), regions_albers['2017 diff'].max())
vmin = -vmax
regions_albers['arrow reduction'] = '↓ ' + regions_albers['reduction']
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col='2017 diff',
text_col='arrow reduction', cmap='PRGn_r', vmin=vmin, vmax=vmax,
title=title, cbar_title='g CO$_2$/kWh')
path = join(figure_path, 'NERC map_diverging_2017.png')
plt.savefig(path, bbox_inches='tight', dpi=350)
title = '2017 U.S. Average\n{} g CO$_2$/kWh (↓ 30%)'.format(usa_2017)
vmax = max(abs(regions_albers['2017 diff'].min()), regions_albers['2017 diff'].max())
vmin = -vmax
regions_albers['arrow reduction'] = '↓ ' + regions_albers['reduction']
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col='2017 diff',
text_col='arrow reduction', cmap='BrBG_r', vmin=vmin, vmax=vmax,
title=title, cbar_title='g CO$_2$/kWh')
path = join(figure_path, 'NERC map_diverging_BrBG_2017.png')
plt.savefig(path, bbox_inches='tight', dpi=350)
path = os.path.join(data_path, 'Facility gen fuels and CO2 2017-08-31.zip')
facility_df = pd.read_csv(path)
print(len(facility_df['plant id'].unique()), 'total plants')
print(len(facility_df.loc[facility_df['lat'].isnull(), 'plant id'].unique()),
'plants without lat/lon')
years = facility_df.loc[facility_df['lat'].isnull(), 'year'].unique()
for year in years:
total_gen = facility_df.loc[facility_df['year'] == year, 'generation (MWh)'].sum()
# Plant ids with no 'lat' in year
no_loc_plants = facility_df.loc[(facility_df['lat'].isnull()) &
(facility_df['year'] == year), 'plant id'].unique()
no_loc_gen = facility_df.loc[(facility_df['year'] == year) &
(facility_df['plant id'].isin(no_loc_plants)),
'generation (MWh)'].sum()
percent_dropped = no_loc_gen / total_gen * 100
print('In {}, {:.3f}% of generation is from plants without lat/lon'.format(year, percent_dropped))
facility_df.dropna(inplace=True, subset=['lat', 'lon'])
facility_df.columns
cols = ['lat', 'lon', 'plant id', 'year']
small_facility = facility_df.loc[:, cols].drop_duplicates()
geometry = [Point(xy) for xy in zip(small_facility.lon, small_facility.lat)]
# small_facility = small_facility.drop(['lon', 'lat'], axis=1)
crs = {'init': 'epsg:4326'}
geo_df = GeoDataFrame(small_facility, crs=crs, geometry=geometry)
geo_df.head()
len(geo_df)
facility_nerc = gpd.sjoin(geo_df, regions, how='inner', op='within')
facility_nerc.head()
facility_nerc = gpd.sjoin(df, geo_df, how="inner")
facility_nerc.head()
geometry = [Point(xy) for xy in zip(facility_nerc.lon, facility_nerc.lat)]
crs = {'init': 'epsg:4326'}
keep_cols = ['NERC_Label', 'plant id', 'year']
facility_nerc = GeoDataFrame(facility_nerc[keep_cols], crs=crs, geometry=geometry)
facility_nerc.head()
df_test = df.to_crs({'init': 'epsg:3395'})
df.plot()
df_test.plot()
# If loading the shapefile
shape_path = ''
regions = gpd.read_file(shape_path)
path = join(data_path, 'Facility labels', 'Facility locations.csv')
location = pd.read_csv(path)
len(location)
location.head()
location.loc[location['lat'].isnull()]
geometry = [Point(xy) for xy in zip(location.lon, location.lat)]
# small_facility = small_facility.drop(['lon', 'lat'], axis=1)
crs = {'init': 'epsg:4326'}
geo_df = GeoDataFrame(location, crs=crs, geometry=geometry)
geo_df.head()
len(geo_df)
len(geo_df['plant id'].unique())
df1 = gpd.sjoin(regions, geo_df)
df1.head()
len(df1)
df2 = gpd.sjoin(geo_df, regions, op='within')
df2.head()
len(df2)
df3 = gpd.sjoin(regions, geo_df, op='contains')
df3.head()
len(df3)
regions
frcc = regions.loc[1, 'geometry']
serc = regions.loc[5, 'geometry']
plant_641 = geo_df.loc[geo_df['plant id'] == 641, 'geometry'].values[0]
frcc.contains(plant_641)
serc.contains(plant_641)
frcc.intersects(serc)
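# Quantify how much the two polygons overlap: shapely's intersection() and
# .area give the shared area as a fraction of FRCC's own area (a small check
# added beyond the original cells).
frcc.intersection(serc).area / frcc.area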
len(df2.loc[df2['plant id'].duplicated()].sort_values('plant id'))
pd.set_option('display.max_rows', 200)
df2.loc[df2['plant id'].duplicated(keep=False)].sort_values('plant id')
facility_df.head()
plants = facility_df.loc[:, ['plant id', 'year', 'lat', 'lon']]
plants.drop_duplicates(inplace=True)
plants.groupby(['plant id']).max().hist()
path = join(data_path, 'EIA downloads', 'EIA-860 2015', '2___Plant_Y2015.xlsx')
nercs2015 = pd.read_excel(path, skiprows=0, parse_cols='C,L')
nercs2015.columns = ['plant id', 'nerc']
path = join(data_path, 'EIA downloads', 'eia8602012', 'PlantY2012.xlsx')
nercs2012 = pd.read_excel(path, skiprows=0, parse_cols='B,J')
nercs2012.columns = ['plant id', 'nerc']
nercs2012.head()
nercs = pd.concat([nercs2012, nercs2015]).drop_duplicates()
len(nercs), len(nercs2012), len(nercs2015)
df = pd.merge(plants, nercs, on=['plant id'], how='left')
plants.tail()
nercs.tail()
nercs.nerc.unique()
df.tail()
len(df.loc[df.isnull().any(axis=1), 'plant id'].unique())
from sklearn import neighbors
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
nercs = ['SERC', 'RFC', 'SPP', 'NPCC', 'WECC', 'MRO', 'TRE', 'FRCC']
df_slim = df.loc[df.nerc.isin(nercs), ['plant id', 'lat', 'lon', 'nerc']].dropna(subset=['lon']).drop_duplicates()
# df_slim keeps only labeled plants, so pull the unlabeled ones straight from df
unknown = df.loc[df.nerc.isnull(), ['plant id', 'lat', 'lon']].dropna(subset=['lon']).drop_duplicates()
n_neighbors = 10
X = df_slim.loc[df_slim.notnull().all(axis=1), ['lat', 'lon']]
y = df_slim.loc[df_slim.notnull().all(axis=1), 'nerc']
X_scale = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33, random_state=42)
y.unique()
knn = neighbors.KNeighborsClassifier()
params = {'weights': ['uniform', 'distance'],
'n_neighbors': [10, 15, 20, 30],
'leaf_size': [3, 5, 10, 30],
'p': [1, 2]}
clf = GridSearchCV(knn, params, n_jobs=-1)
clf.fit(X_train, y_train)
clf.best_estimator_, clf.best_params_, clf.best_score_
clf.score(X_scale, y)
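# The point of the classifier is to assign a NERC region to the plants that
# lack one; a minimal sketch using the tuned grid-search model above
# (assumes `unknown` holds the unlabeled plants with lat/lon columns).
if len(unknown) > 0:
    unknown = unknown.copy()
    unknown['nerc'] = clf.predict(unknown[['lat', 'lon']])
    print(unknown.head())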
clf = neighbors.KNeighborsClassifier(n_neighbors, weights='uniform')
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
clf = neighbors.KNeighborsClassifier(n_neighbors, weights='distance')
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
h = .02 # step size in the mesh
# Create color maps (the colors are cycled over the NERC classes)
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# X is a DataFrame and y holds string labels; use a numpy array and integer
# category codes so the mesh prediction and the color mapping work
X_arr = X.values
y_codes = y.astype('category').cat.codes
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X_arr, y_codes)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_arr[:, 0].min() - 1, X_arr[:, 0].max() + 1
y_min, y_max = X_arr[:, 1].min() - 1, X_arr[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X_arr[:, 0], X_arr[:, 1], c=y_codes, cmap=cmap_bold,
edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("NERC region classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define Python functions for dY/dt and dZ/dt
Step2: Questions
Step3: To Explore
Step4: Type 1 Coherent FFLs can act as Sign-Sensitive Delays
|
<ASSISTANT_TASK:>
Python Code:
# import statements to make numeric and plotting functions available
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
import numpy as np # needed below (np.zeros, np.random.normal); the star imports do not bind the name np
## We'll specify the behavior of X as a series of pulses of different lengths
## so we'll define a function to generate pulses
def pulse(ontime, offtime, ntimes, onval=1):
if ontime >= offtime:
raise Exception("Invalid on/off times.")
signal = np.zeros(ntimes)
signal[ontime:offtime] = onval
return signal
nsteps = 150
short_pulse = pulse(20, 23, nsteps) # 5 sec pulse
long_pulse = pulse(50, 100, nsteps) # 50 sec pulse
X = short_pulse + long_pulse # we can then add the pulses to create
# a single time trace
plot(X, color='black')
xlabel('Time units')
ylabel('Amount of Gene Product')
ylim(0, 1.5)
pass
## Write Python functions for dY/dt and dZ/dt
def dY(B,K,a,X,Y):
pass ## replace this line with your function definition
def dZ(B,Kx,Ky,a,X,Y,Z):
pass ## replace this line with your function definition
def dY(B,K,a,X,Y):
if X > K:
theta = 1
else:
theta = 0
return B * theta - a * Y
def dZ(B,Kx,Ky,a,X,Y,Z):
theta = 0
if (X > Kx) and (Y > Ky):
theta = 1
return B * theta - a * Z
## Plot X, Y, and Z on the same time scale
nsteps = 150
short_pulse = pulse(20, 23, nsteps) # 5 sec pulse
long_pulse = pulse(50, 100, nsteps) # 50 sec pulse
X = short_pulse + long_pulse
# setup parameters for Y and Z
Y = [0]
betay, alphay = 0.2, 0.1
Kxy = 0.5
Z = [0]
betaz, alphaz = 0.2, 0.1
Kxz = 0.5
Kyz = 1
for i in range(nsteps):
xnow = X[i]
ynow, znow = Y[-1], Z[-1]
ynew = ynow + dY(betay, Kxy, alphay, xnow, ynow)
znew = znow + dZ(betaz, Kxz, Kyz, alphaz, xnow, ynow, znow)
Y.append(ynew)
Z.append(znew)
plot(X, 'k--', label='X', linewidth=1.5)
plot(Y, 'b', label='Y')
plot(Z, 'r', label='Z')
ylim(-0.1, 2.5)
xlabel("Time")
ylabel("Concentration")
legend()
pass
nsteps = 150
p1start = 10
p1duration = 5
p2start = 50
p2duration = 50
short_pulse = pulse(p1start, p1start + p1duration, nsteps) # short pulse
long_pulse = pulse(p2start, p2start + p2duration, nsteps) # long pulse
X = short_pulse + long_pulse
# change this `scale` argument to increase/decrease noise
noise = np.random.normal(loc=0, scale=0.2, size=nsteps) # mean=0, sd=0.2
X = X + noise
# setup parameters for Y and Z
Y = [0]
betay, alphay = 0.2, 0.1
Kxy = 0.5
Z = [0]
betaz, alphaz = 0.2, 0.1
Kxz = 0.5
Kyz = 1
for i in range(nsteps):
xnow = X[i]
ynow, znow = Y[-1], Z[-1]
ynew = ynow + dY(betay, Kxy, alphay, xnow, ynow)
znew = znow + dZ(betaz, Kxz, Kyz, alphaz, xnow, ynow, znow)
Y.append(ynew)
Z.append(znew)
# draw each trace as a subfigure
# subfigures stacked in a vertical grid
subplot2grid((3,1),(0,0))
plot(X, 'k', label='X', linewidth=1)
legend()
subplot2grid((3,1),(1,0))
plot(Y, 'b', label='Y', linewidth=2)
legend()
subplot2grid((3,1),(2,0))
plot(Z, 'r', label='Z', linewidth=2)
vlines(p1start, min(Z),max(Z)*1.1,color='black',linestyle='dashed')
annotate("pulse 1 on", xy=(p1start,1),xytext=(40,20),
textcoords='offset points',
horizontalalignment="center",
verticalalignment="bottom",
arrowprops=dict(arrowstyle="->",color='black',
connectionstyle='arc3,rad=0.5',
linewidth=1))
vlines(p2start, min(Z),max(Z)*1.1,color='black',linestyle='dashed')
annotate("pulse 2 on", xy=(p2start,1),xytext=(-40,0),
textcoords='offset points',
horizontalalignment="center",
verticalalignment="bottom",
arrowprops=dict(arrowstyle="->",color='black',
connectionstyle='arc3,rad=0.5',
linewidth=1))
legend()
pass
def Ton(alpha, KYratio):
return (1.0/alpha) * log(1.0/(1.0-KYratio))
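# Quick numeric check (illustrative values): with alpha = 0.1 and Kyz/Yst = 0.5,
# the delay is (1/0.1)*ln(1/(1-0.5)) = 10*ln(2), i.e. about 6.93 time units.
print(Ton(0.1, 0.5))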
## Create a contour plot for a range of alpha and Kyz/Yst
x = alpha = linspace(0.01, 0.2, 100)
y = KYratio = linspace(0.01, 0.99, 100)
X,Y = meshgrid(x, y)
Z = Ton(X,Y)
levels = MaxNLocator(nbins=20).tick_values(Z.min(), Z.max())
im = contourf(X,Y,Z, cmap=cm.inferno_r, levels=levels)
contour(X, Y, Z, levels,
colors=('k',),
linewidths=(0.5,))
colorbar(im)
xlabel('alpha')
ylabel("Kyz/Yst")
pass
## Write a Python function that represents dZ/dt for the Incoherent FFL
## our dY function previously defined stays the same
def dZ_incoh(B1,B2,Kx,Ky,a,X,Y,Z):
pass # define the function here
def dZ_incoh(B1,B2,Kx,Ky,a,X,Y,Z):
theta = 0
B = 0
if (X > Kx) and (Y < Ky):
theta = 1
B = B1
elif (X > Kx) and (Y >= Ky):
theta = 1
B = B2
return B * theta - a * Z
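# Spot-check of the AND-NOT gate with illustrative values (X above Kxz; Y below, then above, Kyz):
# production should switch from the strong rate B1 to the weak rate B2.
print(dZ_incoh(0.2, 0.001, 0.5, 0.5, 0.1, 1.0, 0.0, 0.0))  # ~B1 = 0.2
print(dZ_incoh(0.2, 0.001, 0.5, 0.5, 0.1, 1.0, 2.0, 0.0))  # ~B2 = 0.001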
## Write your simulation here
nsteps = 150
short_pulse = pulse(20, 25, nsteps) # 5 sec pulse
long_pulse = pulse(50, 100, nsteps) # 50 sec pulse
X = short_pulse + long_pulse
# setup parameters for Y and Z
Y = [0]
betay, alphay = 0.2, 0.1
Kxy = 0.5
Z = [0]
betaz1, betaz2 = 0.2, 0.001
alphaz = 0.1
Kxz = 0.5
Kyz = 0.5
for i in range(nsteps):
xnow = X[i]
ynow, znow = Y[-1], Z[-1]
ynew = ynow + dY(betay, Kxy, alphay, xnow, ynow)
znew = znow + dZ_incoh(betaz1, betaz2, Kxz, Kyz, alphaz, xnow, ynow, znow)
Y.append(ynew)
Z.append(znew)
# draw each trace as a subfigure
# subfigures stacked in a vertical grid
subplot2grid((3,1),(0,0))
plot(X, 'k', label='X', linewidth=1)
legend()
ylim(0,1.1)
subplot2grid((3,1),(1,0))
plot(Y, 'b', label='Y', linewidth=2)
legend()
ylim(0,2.1)
subplot2grid((3,1),(2,0))
plot(Z, 'r', label='Z', linewidth=2)
legend()
ylim(0,0.7)
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load house sales data
Step2: Split data into training and testing
Step3: Useful SFrame summary functions
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
Step8: Predicting Values
Step9: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Step10: Residual Sum of Squares
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Step13: Predict the squarefeet given price
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Step15: New Model
Step16: Test your Linear Regression Algorithm
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
train_data,test_data = sales.random_split(.8,seed=0)
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
# if we want to multiply every price by 0.5 it's a simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
def simple_linear_regression(input_feature, output):
# compute the mean of input_feature and output
input_feature_mean = input_feature.mean()
output_mean = output.mean()
# compute the product of the output and the input_feature and its mean
product = output * input_feature
product_mean = product.mean()
# compute the squared value of the input_feature and its mean
input_feature_squered = input_feature * input_feature
input_feature_squered_mean = input_feature_squered.mean()
# use the formula for the slope
line1 = product.sum() - (output.sum() * input_feature.sum()) / input_feature.size()
line2 = input_feature_squered.sum() - (input_feature.sum() * input_feature.sum()) / input_feature.size()
slope = line1 / line2
# use the formula for the intercept
intercept = output_mean - (slope * input_feature_mean)
return (intercept, slope)
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = input_feature * slope + intercept
return predicted_values
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predictions = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residuals = predictions - output
# square the residuals and add them up
RSS = (residuals * residuals).sum()
return(RSS)
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept) / slope
return estimated_feature
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
bedrooms_intercept, bedrooms_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
print bedrooms_intercept, bedrooms_slope
# Compute RSS when using bedrooms on TEST data:
rss_prices_on_bedrooms = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], bedrooms_intercept, bedrooms_slope)
print 'The RSS when using bedrooms on TEST data : ' + str(rss_prices_on_bedrooms)
# Compute RSS when using squarfeet on TEST data:
rss_prices_on_sqft = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS when using squarfeet on TEST data : ' + str(rss_prices_on_sqft)
rss_prices_on_bedrooms > rss_prices_on_sqft
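# The model with the lower RSS on the held-out test data is the better single-feature predictor;
# square footage is expected to win over bedroom count here.
if rss_prices_on_sqft < rss_prices_on_bedrooms:
    print 'The sqft model has the lower test RSS'
else:
    print 'The bedrooms model has the lower test RSS'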
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Here we load the DataFrame from the previous notebook. Note that there is an additional column cat.
Step3: Test Proportions and Effect Size
Step4: Word clouds
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import re
import numpy as np
import dbpedia_config
from scipy.stats import chisquare
target_folder = dbpedia_config.TARGET_FOLDER
apost = re.compile('_s$')
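# illustrative check of the substitution: a token such as 'queen_s' becomes "queen's"
print(apost.sub("'s", 'queen_s'))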
female_pmi = pd.read_csv('{0}/top-200-pmi-female.csv'.format(target_folder), encoding='utf-8')
female_pmi.word = female_pmi.word.map(lambda x: apost.sub('\'s', x))
female_pmi.head()
female_pmi.cat.value_counts() / female_pmi.shape[0] * 100.0
male_pmi = pd.read_csv('{0}/top-200-pmi-male.csv'.format(target_folder), encoding='utf-8')
male_pmi.word = male_pmi.word.map(lambda x: apost.sub('\'s', x))
male_pmi.head()
male_pmi.cat.value_counts() / male_pmi.shape[0] * 100.0
m_proportions = []
f_proportions = []
m_count = male_pmi.cat.value_counts() / male_pmi.shape[0] * 100.0
f_count = female_pmi.cat.value_counts() / female_pmi.shape[0] * 100.0
for c in ('F', 'G', 'O', 'R'):
m_proportions.append(m_count[c] if c in m_count.index else 0.0)
f_proportions.append(f_count[c] if c in f_count.index else 0.0)
m_proportions, f_proportions
chisquare(m_proportions, f_proportions)
p0 = np.array(m_proportions)
p1 = np.array(f_proportions)
np.sqrt(np.sum(np.power(p1 - p0, 2) / p1))
import matta
matta.init_javascript(path='https://rawgit.com/carnby/matta/master/matta/libs')
matta.wordcloud(dataframe=female_pmi.loc[:, ('word', 'pmi_female', 'cat')], text='word',
typeface='Lato', font_weight='bold',
font_size={'value': 'pmi_female'},
font_color={'palette': 'Set2', 'n_colors': 4, 'value': 'cat', 'scale': 'ordinal'})
matta.wordcloud(dataframe=male_pmi.loc[:, ('word', 'pmi_male', 'cat')], text='word',
typeface='Lato', font_weight='bold',
font_size={'value': 'pmi_male'},
font_color={'palette': 'Set2', 'n_colors': 4, 'value': 'cat', 'scale': 'ordinal'})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Para que o notebook IPython coloque as figuras geradas pela matplotlib inline
Step2: Bibliotecas
Step3: <font size="3" face="Times"><h1 style="color
Step4: <font size="3" face="Times"><h1 style="color
Step5: <font size="3" face="Times"><h1 style="color
Step6: <font size="3" face="Times"><h1 style="color
Step7: <font size="3" face="Times"><h1 style="color
Step8: <font size="3" face="Times"><h1 style="color
|
<ASSISTANT_TASK:>
Python Code:
# encoding: utf-8
%matplotlib inline
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
focos = gpd.read_file(r"C:\Users\dougl\Desktop\programacao\focos_2016\focos_2016.shp")
focos["timestamp"] = pd.to_datetime(focos["DataHora"])
focos_tocantins = focos[focos.Estado == 'Tocantins']
focos_setembro = focos_tocantins[focos.timestamp.dt.month == 9]
focos_setembro.head(4)
municipios = gpd.read_file(r"C:\Users\dougl\Desktop\programacao\focos_2016\municipios\municipios.shp")
#17 é o código IBGE para o Estado de Tocantins
municipios_to = municipios[municipios["GEOCODIGO"].str[0:2] == "17"]
focos_setembro.crs = {'init': 'epsg:4674'}
municipios_to.crs = {'init': 'epsg:4674'}
tocantins_dados = gpd.sjoin(focos_setembro, municipios_to, how="right", op='intersects')
tocantins_dados.head(4)
municipios_tocantins = municipios_to.set_index('NOME')
focos_mes = tocantins_dados.groupby('NOME').Municipi.count()
focos_mes.index.name = "Municipios"
focos_mes.name = "focos"
focos_mes.head()
focos_municipios = pd.concat([municipios_tocantins, focos_mes], axis=1)
focos_municipios.head()
fig, ax = plt.subplots(1, figsize=(10, 15))
focos_municipios.plot(column='focos', ax=ax, cmap='Paired')
municipios_tocantins.plot(color='None', edgecolor='black', linewidth=1, ax=ax)
ax.axis('off')
vmin = focos_municipios['focos'].min()
vmax = focos_municipios['focos'].max()
# criando barra de cores
barra = plt.cm.ScalarMappable(cmap='Paired', norm=plt.Normalize(vmin=vmin, vmax=vmax))
barra._A = []
cbar = fig.colorbar(barra)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import sys

CHAR_BIT = 8
INT_BIT = sys.getsizeof(int())  # size of an int object in bytes; only needs to make the shift below large enough

def Min(x, y):
    # (x - y) shifted right by a large amount is -1 (all ones) when x < y and 0 otherwise,
    # so the mask keeps (x - y) only when x is the smaller value
    return y + ((x - y) & ((x - y) >> (INT_BIT * CHAR_BIT - 1)))

def Max(x, y):
    # same sign mask: it removes (x - y) from x when x < y, leaving y
    return x - ((x - y) & ((x - y) >> (INT_BIT * CHAR_BIT - 1)))

x = 15
y = 6
print("Minimum of", x, "and", y, "is", Min(x, y))
print("Maximum of", x, "and", y, "is", Max(x, y))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The MNIST dataset
Step2: Every MNIST sample has two parts
Step3: Inputs
Step4: Our first classification model
Step5: Learning the model parameters from data
Step6: The loss $L$ is usually approximated on a batch of images.
Step7: Training/testing loop
Step8: Training #1
Step9: The obtained classification accuracy on the test set should be between 91 and 93 percent.
Step10: Training #2
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
tf.set_random_seed(1337)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import matplotlib.pyplot as plt
def show_sample(index):
image = mnist.train.images[index].reshape(28, 28) # 784 -> 28x28
label = mnist.train.labels[index]
plt.imshow(image, cmap='Greys')
plt.show()
plt.clf()
plt.cla()
plt.close()
print('label[%d]: %s' % (index, str(label)))
show_sample(10)
print('------------------------------------------------------------')
show_sample(24)
print('------------------------------------------------------------')
show_sample(12)
print('------------------------------------------------------------')
show_sample(11)
print('------------------------------------------------------------')
show_sample(18)
print('------------------------------------------------------------')
def build_inputs():
    # placeholders for a batch of flattened 28x28 images and their one-hot labels
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.float32, [None, 10])
    return x, y
def build_affine(x):
    # x*W + b
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    return tf.matmul(x, W) + b
def build_loss(logits, y):
    # mean softmax cross-entropy over the batch
    return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
def build_ncorrect(logits, y):
    # number of correctly classified samples in the batch
    return tf.reduce_sum(tf.cast(tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)), tf.float32))
def get_accuracy(ncorrect):
    # fraction of correctly classified samples over the whole test set
    return sess.run(ncorrect, feed_dict={x: mnist.test.images, y: mnist.test.labels}) / mnist.test.num_examples
def run_training_loop(step, accuracy, batchsize, niters):
    # run `niters` optimization steps on random mini-batches, reporting test accuracy every 1000 iterations
    for i in range(niters):
        batch_x, batch_y = mnist.train.next_batch(batchsize)
        sess.run(step, feed_dict={x: batch_x, y: batch_y})
        if i % 1000 == 0 or i == niters - 1:
            print('iteration %d: test accuracy %.4f' % (i, get_accuracy(accuracy)))
# prepare computation graphs (plain SGD with learning rate 0.5 is an illustrative choice)
x, y = build_inputs()
logits = build_affine(x)
loss = build_loss(logits, y)
ncorrect = build_ncorrect(logits, y)
step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# final preparations for learning
sess = tf.Session()
tf.global_variables_initializer().run(session=sess)
# start the learning process: batch size=100, number of iterations=10000
run_training_loop(step, ncorrect, 100, 10000)
# clear the current session (so we can start another one later)
sess.close()
tf.reset_default_graph()
def build_convnet(x):
    # one possible small convnet (illustrative architecture):
    # 5x5 conv with 32 maps -> 2x2 max-pool -> dense 128 -> 10 output logits
    x_image = tf.reshape(x, [-1, 28, 28, 1])
    conv = tf.layers.conv2d(x_image, filters=32, kernel_size=5, padding='same', activation=tf.nn.relu)
    pool = tf.layers.max_pooling2d(conv, pool_size=2, strides=2)
    flat = tf.reshape(pool, [-1, 14 * 14 * 32])
    hidden = tf.layers.dense(flat, 128, activation=tf.nn.relu)
    return tf.layers.dense(hidden, 10)
# inputs and model outputs
x, y = build_inputs()
logits = build_convnet(x)
# loss-computation graph
loss = build_loss(logits, y)
# testing-accuracy graph
ncorrect = build_ncorrect(logits, y)
# we use RMSProp to gradually tune the model parameters (similar to SGD, but better in most cases)
step = tf.train.RMSPropOptimizer(1e-3).minimize(loss)
# final preparations for learning
sess = tf.Session()
tf.global_variables_initializer().run(session=sess)
# start the learning process: batch size=100, number of iterations=5000
run_training_loop(step, ncorrect, 100, 5001)
# clear the current session (so we can start another one later)
sess.close()
tf.reset_default_graph()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining some utilitary functions and importing some modules
Step2: Query Gaia tables in VizieR
Step3: By default, output is limited to 50 rows. Let's change this and redo the query.
Step4: We add now a constraint to keep only sources with a parallax > 5 mas
Step5: Send result to VO tools through SAMP
Step6: Query RRLyrae table and retrieve a light curve
Step7: Retrieve light curve points for Source = 4660711867541110784
Step8: Plot the light curve
Step9: Fold the light curve, folded by the period (P1 = 0.21271385 d)
Step11: Recreate HR diagram from Gaia DR1 paper
Step12: Plot the HR diagram
Step13: Find Gaia couterparts for a list of AllWISEsources and draw a color-color diagram
Step14: Cross-match with Gaia DR1
Step15: Create color-color plot
Step16: Retrieve Gaia sources within HST and SDSS coverage
Step17: Compute intersection
Step18: Retrieve Gaia DR1 sources in the coverage of the computed MOC
Step20: Query SIMBAD to retrieve stars member of a cluster and cross-match with TGAS
Step21: We now query the xmatch service to find counterparts in TGAS (max search radius
|
<ASSISTANT_TASK:>
Python Code:
def query_TAP(tap_endpoint, adql_query, table_to_upload=None):
    """Query a TAP service (designated by its tap_endpoint)
    with a given ADQL query

    Query is performed synchronously

    Return an AstroPy Table object
    """
import requests
from astropy.table import Table
from astropy.io.votable import parse_single_table
import os
import tempfile
import warnings
r = requests.post(tap_endpoint + '/sync', data={'query': adql_query, 'request': 'doQuery', 'lang': 'adql', 'format': 'votable', 'phase': 'run'})
with warnings.catch_warnings():
warnings.simplefilter("ignore")
tmp_vot = tempfile.NamedTemporaryFile(delete = False)
with open(tmp_vot.name, 'w') as h:
for line in r.iter_lines():
if line:
h.write(line.decode(r.encoding)+'\n')
table = parse_single_table(tmp_vot.name).to_table()
# finally delete temp files
os.unlink(tmp_vot.name)
return table
from astroquery.xmatch import XMatch
import types
import six
from astropy.io import ascii
# monkey patching XMatch
def patched_is_table_available(self, table_id):
if isinstance(table_id, six.string_types) and (table_id[:7] == 'vizier:'):
table_id = table_id[7:]
if not isinstance(table_id, six.string_types):
return False
return table_id in self.get_available_tables()
XMatch.is_table_available = types.MethodType(patched_is_table_available, XMatch)
from astropy import units as u
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
def plot_scatter_density(xdata, ydata, xlabel, ylabel, title, xlim=None, ylim=None, cmap='viridis', invert_yaxis = True, s=2, grid=False):
from scipy.stats import gaussian_kde
import numpy as np
x = np.reshape(np.array(xdata, copy=False).astype('float'), (len(xdata)))
y = np.reshape(np.array(ydata, copy=False).astype('float'), len(ydata))
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
# Sort the points by density, so that the densest points are plotted last
idx = z.argsort()
x, y, z = x[idx], y[idx], z[idx]
w = h = 8
fig, ax = plt.subplots(figsize = (w, h))
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
if xlim:
ax.set_xlim(*xlim)
if ylim:
ax.set_ylim(*ylim)
ax.set_title(title)
ax.scatter(x, y, c=z, s=s, edgecolor='', cmap=cmap)
if invert_yaxis:
ax.invert_yaxis()
ax.grid(grid)
# disable warnings as not to clutter notebook
import warnings
warnings.simplefilter("ignore")
from astroquery.vizier import Vizier
result = Vizier.query_region('Messier 44', radius='0d20m0s', catalog='I/337/gaia')
print(result[0])
Vizier.ROW_LIMIT = 100000
result = Vizier.query_region('Messier 44', radius='0d20m0s', catalog='I/337/gaia')
print(result[0]['RA_ICRS', 'DE_ICRS', 'Source', 'Plx'])
result = Vizier(column_filters={"Plx":">5"}).query_region('Messier 44', radius='0d20m0s', catalog='I/337/gaia', )
print(result[0]['RA_ICRS', 'DE_ICRS', 'Source', 'Plx'])
# TODO
rrlyrae = Vizier.get_catalogs(catalog='I/337/rrlyrae')[0]
rrlyrae.sort('P1')
print(rrlyrae['Source', 'P1', 'RA_ICRS', 'DE_ICRS'])
lc_points = Vizier(column_filters={"Source":"4660711867541110784"}).get_catalogs(catalog='I/337/fov')[0]
print(lc_points['ObsTime', 'FG'])
x = lc_points['ObsTime']
y = lc_points['FG']
y_err = lc_points['e_FG']
plt.scatter(x, y, color='r')
plt.errorbar(x, y, yerr=y_err, color='r', linestyle='None')
plt.grid(True)
period = 0.21271385
#(time+8)%16 - 8
x = (lc_points['ObsTime'] % period) / period
y = lc_points['FG']
y_err = lc_points['e_FG']
plt.scatter(x, y, color='r')
plt.errorbar(x, y, yerr=y_err, color='r', linestyle='None')
plt.grid(True)
adql_query = """SELECT gaia.ra, gaia.dec, gaia.source_id, gaia.hip, gaia.phot_g_mean_mag+5*log10(gaia.parallax)-10 as g_mag_abs,
hip."B-V"
FROM "I/337/tgas" as gaia
inner join "I/311/hip2" as hip
on gaia.hip= hip.HIP
where gaia.parallax/gaia.parallax_error >= 5 and hip."e_B-V" > 0.0 and hip."e_B-V" <= 0.05 and
2.5/log(10)*gaia.phot_g_mean_flux_error/gaia.phot_g_mean_flux <= 0.05"""
tap_vizier_endpoint = 'http://tapvizier.u-strasbg.fr/TAPVizieR/tap/'
hr_data = query_TAP(tap_vizier_endpoint, adql_query)
print(hr_data)
plot_scatter_density(hr_data['B-V'], hr_data['g_mag_abs'],
title='Gaia DR1 HR diagram', xlabel = 'B-V', ylabel=r'$M_G$',
cmap='viridis', invert_yaxis = True, s=6, grid=True)
from astropy.io import votable
allwise_sources = votable.parse_single_table('allwise-SMC-sample.vot').to_table()
print(len(allwise_sources))
allwise_X_gaia = XMatch.query(cat1 = allwise_sources,
cat2 = 'vizier:I/337/gaia',
max_distance = 1 * u.arcsec,
colRA1 = 'RAJ2000',
colDec1= 'DEJ2000', cache=False)
print(len(allwise_X_gaia))
plot_scatter_density(allwise_X_gaia['phot_g_mean_mag'] - allwise_X_gaia['W1mag'],
allwise_X_gaia['W1mag'] - allwise_X_gaia['W4mag'], title='Color-color plot',
xlabel = 'G-W1', ylabel='W1-W4',
cmap='viridis', invert_yaxis = True, s=6, grid=True)
from mocpy import MOC
moc_sdss = MOC.from_ivorn('ivo://CDS/P/SDSS9/u', nside=512)
moc_hst = MOC.from_ivorn('ivo://CDS/P/HST/R', nside=512)
moc_sdss.plot('SDSS coverage')
moc_hst.degrade_to_order(6).plot('HST coverage')
common_coverage = moc_sdss.intersection(moc_hst)
print('Common coverage represents %.5f%% of the sky' % (common_coverage.sky_fraction))
common_coverage.degrade_to_order(6).plot('Common coverage between SDSS and HST')
gaia_sources = common_coverage.query_vizier_table('GAIA-DR1', max_rows=200000)
print(gaia_sources['ra', 'dec'])
stars_in_cluster = query_TAP('http://simbad.u-strasbg.fr/simbad/sim-tap',
    """SELECT ra, dec, pmra, pmdec, plx_value, plx_err FROM basic
    WHERE otype_txt = '*iC'
    and plx_value>1 and plx_value/plx_err > 5""")
print(stars_in_cluster)
xmatch_result = XMatch.query(cat1 = stars_in_cluster,
cat2 = 'vizier:I/337/tgasptyc',
max_distance = 1 * u.arcsec,
colRA1 = 'ra',
colDec1= 'dec', cache=False)
print(xmatch_result['ra', 'dec', 'plx_value', 'Plx', 'plx_err', 'e_Plx'])
histvals, binvals, patches = plt.hist(xmatch_result['Plx'], bins=100, facecolor='g', alpha=0.75)
plt.xlabel('Plx')
plt.ylabel('count')
plt.title('Gaia parallax distribution for cluster members')
plt.grid(True)
histvals, binvals, patches = plt.hist(100 * abs(xmatch_result['Plx'] - xmatch_result['plx_value']) / xmatch_result['Plx'], bins=100, facecolor='g', alpha=0.75)
plt.xlabel('Percentage')
plt.ylabel('count')
plt.title('Relative difference between parallax in Simbad and parallax measures by Gaia')
plt.grid(True)
plot_scatter_density(xmatch_result['ra'] - xmatch_result['_RAJ2000'],
xmatch_result['dec'] - xmatch_result['_DEJ2000'],
r'$\Delta$RA', r'$\Delta$dec', 'Position differences between Simbad and Gaia', xlim=[-5e-5, 5e-5], ylim=[-5e-5, 5e-5],
cmap='viridis', invert_yaxis = False, s=6, grid=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot Functions
Step2: Bayesian optimization
Step3: Generate Initial Data
Step4: Define Model
Step5: Define Acquisition Function
Step6: Create OuterLoop
Step7: Add Plotting of Acquisition Function
Step8: Find Esimated Minimum at Every Iteration
Step9: Run Optimization
Step10: Plot Final Model
Step11: Comparison to High Fidelity Only Bayesian Optimization
Step12: Plot Estimated Minimum Location
|
<ASSISTANT_TASK:>
Python Code:
# Load function
import emukit.test_functions.forrester
# The multi-fidelity Forrester function is already wrapped as an Emukit UserFunction object in
# the test_functions package
forrester_fcn, _ = emukit.test_functions.forrester.multi_fidelity_forrester_function()
forrester_fcn_low = forrester_fcn.f[0]
forrester_fcn_high = forrester_fcn.f[1]
# Assign costs
low_fidelity_cost = 1
high_fidelity_cost = 5
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams.update({'font.size': 16})
FIG_SIZE = (12, 8)
import numpy as np
np.random.seed(12345)
from emukit.multi_fidelity.convert_lists_to_array import convert_x_list_to_array
n_plot_points = 100
x_plot = np.linspace(0, 1, 500)[:, None]
y_plot_low = forrester_fcn_low(x_plot)
y_plot_high = forrester_fcn_high(x_plot)
plt.figure(figsize=FIG_SIZE)
plt.plot(x_plot, y_plot_low, 'b')
plt.plot(x_plot, y_plot_high, 'r')
plt.legend(['Low fidelity', 'High fidelity'])
plt.xlim(0, 1)
plt.title('High and low fidelity Forrester functions')
plt.xlabel('x')
plt.ylabel('y');
plt.show()
from emukit.core import ParameterSpace, ContinuousParameter, InformationSourceParameter
n_fidelities = 2
parameter_space = ParameterSpace([ContinuousParameter('x', 0, 1), InformationSourceParameter(n_fidelities)])
x_low = np.random.rand(12)[:, None]
x_high = x_low[:6, :]
y_low = forrester_fcn_low(x_low)
y_high = forrester_fcn_high(x_high)
from emukit.multi_fidelity.models.linear_model import GPyLinearMultiFidelityModel
import GPy
from emukit.multi_fidelity.kernels.linear_multi_fidelity_kernel import LinearMultiFidelityKernel
from emukit.multi_fidelity.convert_lists_to_array import convert_xy_lists_to_arrays
from emukit.model_wrappers import GPyMultiOutputWrapper
from GPy.models.gp_regression import GPRegression
x_array, y_array = convert_xy_lists_to_arrays([x_low, x_high], [y_low, y_high])
kern_low = GPy.kern.RBF(1)
kern_low.lengthscale.constrain_bounded(0.01, 0.5)
kern_err = GPy.kern.RBF(1)
kern_err.lengthscale.constrain_bounded(0.01, 0.5)
multi_fidelity_kernel = LinearMultiFidelityKernel([kern_low, kern_err])
gpy_model = GPyLinearMultiFidelityModel(x_array, y_array, multi_fidelity_kernel, n_fidelities)
gpy_model.likelihood.Gaussian_noise.fix(0.1)
gpy_model.likelihood.Gaussian_noise_1.fix(0.1)
model = GPyMultiOutputWrapper(gpy_model, 2, 5, verbose_optimization=False)
model.optimize()
x_plot_low = np.concatenate([np.atleast_2d(x_plot), np.zeros((x_plot.shape[0], 1))], axis=1)
x_plot_high = np.concatenate([np.atleast_2d(x_plot), np.ones((x_plot.shape[0], 1))], axis=1)
def plot_model(x_low, y_low, x_high, y_high):
mean_low, var_low = model.predict(x_plot_low)
mean_high, var_high = model.predict(x_plot_high)
plt.figure(figsize=FIG_SIZE)
def plot_with_error_bars(x, mean, var, color):
plt.plot(x, mean, color=color)
plt.fill_between(x.flatten(), mean.flatten() - 1.96*var.flatten(), mean.flatten() + 1.96*var.flatten(),
alpha=0.2, color=color)
plot_with_error_bars(x_plot_high[:, 0], mean_low, var_low, 'b')
plot_with_error_bars(x_plot_high[:, 0], mean_high, var_high, 'r')
plt.plot(x_plot, forrester_fcn_high(x_plot), 'k--')
plt.scatter(x_low, y_low, color='b')
plt.scatter(x_high, y_high, color='r')
plt.legend(['Low fidelity model', 'High fidelity model', 'True high fidelity'])
plt.title('Low and High Fidelity Models')
plt.xlim(0, 1)
plt.xlabel('x')
plt.ylabel('y');
plt.show()
plot_model(x_low, y_low, x_high, y_high)
from emukit.bayesian_optimization.acquisitions.entropy_search import MultiInformationSourceEntropySearch
from emukit.core.acquisition import Acquisition
# Define cost of different fidelities as acquisition function
class Cost(Acquisition):
def __init__(self, costs):
self.costs = costs
def evaluate(self, x):
fidelity_index = x[:, -1].astype(int)
x_cost = np.array([self.costs[i] for i in fidelity_index])
return x_cost[:, None]
@property
def has_gradients(self):
return True
def evaluate_with_gradients(self, x):
        return self.evaluate(x), np.zeros(x.shape)
cost_acquisition = Cost([low_fidelity_cost, high_fidelity_cost])
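# Quick sanity check (illustrative): the last column encodes the information source,
# so a low-fidelity point should cost 1 and a high-fidelity point 5.
print(cost_acquisition.evaluate(np.array([[0.5, 0.0], [0.5, 1.0]])))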
acquisition = MultiInformationSourceEntropySearch(model, parameter_space) / cost_acquisition
from emukit.core.loop import FixedIntervalUpdater, OuterLoop, SequentialPointCalculator
from emukit.core.loop.loop_state import create_loop_state
from emukit.core.optimization.multi_source_acquisition_optimizer import MultiSourceAcquisitionOptimizer
from emukit.core.optimization import GradientAcquisitionOptimizer
initial_loop_state = create_loop_state(x_array, y_array)
acquisition_optimizer = MultiSourceAcquisitionOptimizer(GradientAcquisitionOptimizer(parameter_space), parameter_space)
candidate_point_calculator = SequentialPointCalculator(acquisition, acquisition_optimizer)
model_updater = FixedIntervalUpdater(model)
loop = OuterLoop(candidate_point_calculator, model_updater, initial_loop_state)
def plot_acquisition(loop, loop_state):
colours = ['b', 'r']
plt.plot(x_plot_low[:, 0], loop.candidate_point_calculator.acquisition.evaluate(x_plot_low), 'b')
plt.plot(x_plot_high[:, 0], loop.candidate_point_calculator.acquisition.evaluate(x_plot_high), 'r')
previous_x_collected = loop_state.X[[-1], :]
fidelity_idx = int(previous_x_collected[0, -1])
plt.scatter(previous_x_collected[0, 0],
loop.candidate_point_calculator.acquisition.evaluate(previous_x_collected),
color=colours[fidelity_idx])
plt.legend(['Low fidelity', 'High fidelity'], fontsize=12)
plt.title('Acquisition Function at Iteration ' + str(loop_state.iteration))
plt.xlabel('x')
plt.xlim(0, 1)
plt.ylabel('Acquisition Value')
plt.tight_layout()
plt.show()
loop.iteration_end_event.append(plot_acquisition)
x_search = np.stack([np.linspace(0, 1, 1000), np.ones(1000)], axis=1)
model_min_mean = []
model_min_loc = []
def calculate_metrics(loop, loop_state):
mean, var = loop.model_updaters[0].model.predict(x_search)
model_min_mean.append(np.min(mean))
model_min_loc.append(x_search[np.argmin(mean), 0])
# subscribe to event
loop.iteration_end_event.append(calculate_metrics)
loop.run_loop(forrester_fcn, 10)
is_high_fidelity = loop.loop_state.X[:, -1] == 1
plot_model(x_low=loop.loop_state.X[~is_high_fidelity, 0], y_low=loop.loop_state.Y[~is_high_fidelity],
x_high=loop.loop_state.X[is_high_fidelity, 0], y_high=loop.loop_state.Y[is_high_fidelity])
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.bayesian_optimization.acquisitions.entropy_search import EntropySearch
from emukit.model_wrappers import GPyModelWrapper
import GPy
# Make model
gpy_model = GPy.models.GPRegression(x_high, y_high)
gpy_model.Gaussian_noise.variance.fix(0.1)
hf_only_model = GPyModelWrapper(gpy_model)
# Create loop
hf_only_space = ParameterSpace([ContinuousParameter('x', 0, 1)])
hf_only_acquisition = EntropySearch(hf_only_model, hf_only_space)
hf_only_loop = BayesianOptimizationLoop(hf_only_space, hf_only_model, hf_only_acquisition)
# Calculate best guess at minimum at each iteration of loop
hf_only_model_min_mean = []
x_search = np.linspace(0, 1, 1000)[:, None]
hf_only_model_min_loc = []
def calculate_metrics(loop, loop_state):
mean, var = loop.model_updaters[0].model.predict(x_search)
hf_only_model_min_mean.append(np.min(mean))
hf_only_model_min_loc.append(x_search[np.argmin(mean)])
# subscribe to event
hf_only_loop.iteration_end_event.append(calculate_metrics)
# Run optimization
hf_only_loop.run_loop(forrester_fcn_high, 10)
# Plot comparison
plt.figure(figsize=FIG_SIZE)
x = np.array(range(len(model_min_mean))) + 1
# Calculate cumulative cost of evaluating high fidelity only observations
n_hf_points = hf_only_loop.loop_state.X.shape[0]
cumulative_cost_hf = high_fidelity_cost * (np.array(range(n_hf_points)) + 1)
cumulative_cost_hf = cumulative_cost_hf[x_high.shape[0]:]
# Calculate cumulative cost of evaluating multi-fidelity observations
cost_mf = cost_acquisition.evaluate(loop.loop_state.X)
cumulative_cost_mf = np.cumsum(cost_mf)
cumulative_cost_mf = cumulative_cost_mf[x_array.shape[0]:]
x_min = np.min([cumulative_cost_hf, cumulative_cost_mf])
x_max = np.max([cumulative_cost_hf, cumulative_cost_mf])
plt.plot(cumulative_cost_hf, hf_only_model_min_loc, 'm', marker='x', markersize=16)
plt.plot(cumulative_cost_mf, model_min_loc, 'c', marker='.', markersize=16)
plt.hlines(x_search[np.argmin(forrester_fcn_high(x_search))], x_min, x_max, color='k', linestyle='--')
plt.legend(['High fidelity only optimization', 'Multi-fidelity only optimization', 'True minimum'])
plt.title('Comparison of Multi-Fidelity and High Fidelity Only Optimizations')
plt.ylabel('Estimated Location of Minimum')
plt.xlabel('Cumulative Cost of Evaluting Objective');
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here, we follow the reasoning presented by Webster (1904) for analyzing the ellipsoidal coordinate $\lambda$ describing a prolate ellipsoid.
Step2: By using these constants, we calculate the coefficients $p_{2}$ (equation 5), $p_{1}$ (equation 6) and $p_{0}$ (equation 7) as follows
Step3: In the sequence, we define a set of values for the variable $\rho$ in an interval $\left[ \rho_{min} \, , \rho_{max} \right]$ and evaluate the quadratic equation $f(\rho)$ (equation 4).
Step4: Finally, the cell below shows the quadratic equation $f(\rho)$ (equation 4) evaluated in the range $\left[ \rho_{min} \, , \rho_{max} \right]$ defined above.
Step5: Remember that we are interested in a $\rho$ satisfying equation 3. Consequently, according to the figures shown above, we are interested in the largest root $\lambda$ of the quadratic equation $f(\rho)$ (equation 4).
Step6: By substituing $\lambda$ in equation 4, we can verify that it is a root of $f(\rho)$.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
a = 20.
b = 13.
x = 21.
y = 23.
z = 30.
p2 = 1.
p1 = a**2 + b**2 - (x**2) - (y**2) - (z**2)
p0 = (a*b)**2 - (b*x)**2 - (a*y)**2 - (a*z)**2
rho_min = -a**2 - 100.
rho_max = -b**2 + 2000.
rho = np.linspace(rho_min, rho_max, 100)
f = p2*(rho**2) + p1*rho + p0
ymin = np.min(f) - 0.1*(np.max(f) - np.min(f))
ymax = np.max(f) + 0.1*(np.max(f) - np.min(f))
plt.close('all')
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot([rho_min, rho_max], [0., 0.], 'k-')
plt.plot([-a**2, -a**2], [ymin, ymax], 'r--', label = '$-a^{2}$')
plt.plot([-b**2, -b**2], [ymin, ymax], 'g--', label = '$-b^{2}$')
plt.plot(rho, f, 'k-', linewidth=2.)
plt.xlim(rho_min, rho_max)
plt.ylim(ymin, ymax)
plt.legend(loc = 'best')
plt.xlabel('$\\rho$', fontsize = 20)
plt.ylabel('$f(\\rho)$', fontsize = 20)
plt.subplot(1,2,2)
plt.plot([rho_min, rho_max], [0., 0.], 'k-')
plt.plot([-a**2, -a**2], [ymin, ymax], 'r--', label = '$-a^{2}$')
plt.plot([-b**2, -b**2], [ymin, ymax], 'g--', label = '$-b^{2}$')
plt.plot(rho, f, 'k-', linewidth=2.)
plt.xlim(rho_min, 0.)
plt.ylim(-0.5*10**6, 10**5)
plt.legend(loc = 'best')
plt.xlabel('$\\rho$', fontsize = 20)
#plt.ylabel('$f(\\rho)$', fontsize = 20)
plt.tight_layout()
plt.show()
delta = p1**2 - 4.*p2*p0
lamb = (-p1 + np.sqrt(delta))/(2.*p2)
print 'lambda = %.5f' % lamb
f_lamb = p2*(lamb**2) + p1*lamb + p0
print 'f(lambda) = %.5f' % f_lamb
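# The largest root should satisfy lambda > -b**2 (cf. the plots above); quick check:
print 'lambda > -b**2 : %s' % (lamb > -b**2)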
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Necessary packages and functions call
Step2: Data loading & Select source, target, validation datasets
Step3: Data preprocessing
Step4: Run DVRL
Step5: Evaluations
|
<ASSISTANT_TASK:>
Python Code:
# Uses pip3 to install necessary package (lightgbm)
!pip3 install lightgbm
# Resets the IPython kernel to import the installed package.
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import os
from git import Repo
# Current working directory
repo_dir = os.getcwd() + '/repo'
if not os.path.exists(repo_dir):
os.makedirs(repo_dir)
# Clones github repository
if not os.listdir(repo_dir):
git_url = "https://github.com/google-research/google-research.git"
Repo.clone_from(git_url, repo_dir)
import numpy as np
import tensorflow as tf
import lightgbm
# Sets current directory
os.chdir(repo_dir)
from dvrl.data_loading import load_rossmann_data, preprocess_data
from dvrl import dvrl
from dvrl.dvrl_metrics import learn_with_dvrl, learn_with_baseline
# The number of source / validation / target samples (79%/1%/20%)
dict_no = dict()
dict_no['source'] = 667027 # 79% of data
dict_no['valid'] = 8443 # 1% of data
# Selects a setting and target store type
setting = 'train-on-rest'
target_store_type = 'B'
# Loads data and selects source, target, validation datasets
load_rossmann_data(dict_no, setting, target_store_type)
print('Finished data loading.')
# Normalization methods: either 'minmax' or 'standard'
normalization = 'minmax'
# Extracts features and labels. Then, normalizes features.
x_source, y_source, x_valid, y_valid, x_target, y_target, _ = \
preprocess_data(normalization, 'source.csv', 'valid.csv', 'target.csv')
print('Finished data preprocess.')
# Resets the graph
tf.reset_default_graph()
# Defines the problem
problem = 'regression'
# Network parameters
parameters = dict()
parameters['hidden_dim'] = 100
parameters['comb_dim'] = 10
parameters['iterations'] = 1000
parameters['activation'] = tf.nn.tanh
parameters['layer_number'] = 5
parameters['batch_size'] = 50000
parameters['learning_rate'] = 0.001
# Defines predictive model
pred_model = lightgbm.LGBMRegressor()
# Sets checkpoint file name
checkpoint_file_name = './tmp/model.ckpt'
# Defines flag for using stochastic gradient descent / pre-trained model
flags = {'sgd': False, 'pretrain': False}
# Initializes DVRL
dvrl_class = dvrl.Dvrl(x_source, y_source, x_valid, y_valid, problem, pred_model, parameters, checkpoint_file_name, flags)
# Trains DVRL
dvrl_class.train_dvrl('rmspe')
# Estimates data values
dve_out = dvrl_class.data_valuator(x_source, y_source)
# Predicts with DVRL
y_target_hat = dvrl_class.dvrl_predictor(x_target)
print('Finished data valuation.')
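# Illustrative peek at the estimated data values (not one of the original evaluation metrics):
print('data values - min: %.4f, mean: %.4f, max: %.4f' % (dve_out.min(), dve_out.mean(), dve_out.max()))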
# Defines evaluation model
eval_model = lightgbm.LGBMRegressor()
# DVRL-weighted learning
dvrl_perf = learn_with_dvrl(dve_out, eval_model,
x_source, y_source, x_valid, y_valid, x_target, y_target, 'rmspe')
# Baseline prediction performance (treat all training samples equally)
base_perf = learn_with_baseline(eval_model, x_source, y_source, x_target, y_target, 'rmspe')
print('Finished evaluation.')
print('DVRL learning performance: ' + str(np.round(dvrl_perf, 4)))
print('Baseline performance: ' + str(np.round(base_perf, 4)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Be careful!
Step2: Quiz
Step3: Control Statements
Step4: tuple
Step5: Dictionaries
Step6: set
Step7: List comprehension
Step8: Generators
|
<ASSISTANT_TASK:>
Python Code:
t=0
if t > 60:
print('its very hot')
elif t > 50:
print('its hot')
elif t > 40:
print('its warm')
else:
print('its cool')
t=55
if t > 40:
print('its very hot')
elif t > 50:
print('its hot')
elif t > 60:
print('its warm')
else:
print('its cool')
i=0
while i<10:
print(i)
i+=1
from random import randint
children = 0
boy = 0
for i in range(10000):
gender = randint(0,1) # boy=1, girl=0
children += 1
while gender != 0:
boy += gender
gender = randint(0,1)
children += 1
print(boy/children)
for i in range(10):
print(i)
if i == 5:
break
for i in range(10):
print(i)
if i > 5:
continue
print("Hey")
def func():
pass
func()
t = (0,1,'test')
print(t)
t[0]=1
(1,)
d = {}
d['name'] = 'Hamed'
d['family name'] = 'Seyed-allaei'
d[0]=12
d['a']=''
print(d)
print(d['name'])
print(d[0])
for i,j in d.items():
print(i,j)
a = set(['c', 'a','b','b'])
b = set(['c', 'd','e'])
print(a,b)
a | b
a & b
a - b
b - a
a ^ b
l = []
for i in range(10):
l.append(i*i)
print(l)
[i*i for i in range(10)]
{i:i**2 for i in range(10)}
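# a set comprehension uses the same syntax with curly braces and no key
{i % 3 for i in range(10)}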
def myrange(n):
i = 0
while i < n:
yield i
yield i**2
i+=1
x = myrange(10)
type(x)
next(x)
[i for i in myrange(10)]
for i in myrange(10):
print(i)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This model is by no means great, but it does predict with .63 recall and .54 precision.
Step2: This is a much worse model, and it is always predicting a value of 7.
Step3: Again, this is a lousy model like the previous one, as it just predicts 9.
Step4: These neural network based models are giving the same results, except for the convolutional network. We'll try it again for a logistic regression, but pre-flattening the data.
Step5: And the same results as with the previous logistic regression.
Step6: With scikit-learn's logistic regression algorithm, we get much better (but not good) precision than with the Keras-based models above.
Step7: The random forest is actually working much better than the neural networks and logistic regression models, with better precision than even the convolutional neural network. The convolutional neural network still has the best recall, however. It's still not a great model, but it has precision and recall better than 50%; still, most test records are being predicted as 9, as in the other models. Apparently, our data is unbalanced among the classes - there are tons of category 'plain' compared to others.
|
<ASSISTANT_TASK:>
Python Code:
# re-load the saved data if needed
import numpy as np
import pandas as pd
A = np.load('/home/nick/Documents/LewisUniversity/MachineLearning/Project/visionmatrix.npy')
#Let's start with the model parameters defined in the Week6 notebook for this data, changing the input shape as appropriate.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Convolution2D, MaxPooling2D, Flatten
from keras.regularizers import l2, l1
from keras.optimizers import SGD
from keras import backend as K
K.set_image_dim_ordering('th')
model = Sequential()
model.add(Convolution2D(32, 5, 5, border_mode='valid', input_shape=(3, 50, 50), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(17, activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
#Because of time constraints, we'll just run 10 epochs instead of say, 20.
model = model.fit(X_train, y_train, batch_size = 256,
nb_epoch = 10, verbose=2, validation_data=(X_test,y_test))
predicted = model.model.predict_classes(X_test)
y_train.shape
predicted
from sklearn.metrics import classification_report, f1_score, accuracy_score, confusion_matrix
# put the y_test back into a format of non-one-hot encoded for comparison
y_test_orig = pd.DataFrame(y_test).idxmax(1)
print "Convolution Network Results\nConfusion Matrix: "
print confusion_matrix(y_test_orig,predicted)
print "\n Classifcation Report"
print classification_report(y_test_orig,predicted)
# Let's try with the Keras documentation example
# apply a 5x5 convolution with 32 output filters on a 50x50 image:
model = Sequential()
model.add(Convolution2D(16, 3, 3, border_mode='valid', input_shape=(3, 50, 50), activation='relu'))
# add a 3x3 convolution on top, with 16 output filters:
#model.add(Convolution2D(16, 3, 3, border_mode='same', activation='relu'))
# and flatten it and add a dense layer to get it to match the 17 categories.
model.add(Flatten())
model.add(Dense(17, activation='relu'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model = model.fit(X_train, y_train, batch_size = 256,
nb_epoch = 5, verbose=2, validation_data=(X_test,y_test))
predicted = model.model.predict_classes(X_test)
# put the y_test back into a format of non-one-hot encoded for comparison
y_test_orig = pd.DataFrame(y_test).idxmax(1)
print "Convolution Network Results\nConfusion Matrix: "
print confusion_matrix(y_test_orig,predicted)
print "\n Classifcation Report"
print classification_report(y_test_orig,predicted)
# Let's try something totally different, a neural network based on the homework from week 5:
model = Sequential()
model.add(Flatten(input_shape=(3, 50, 50)))
model.add(Dense(output_dim=100,
activation='sigmoid', W_regularizer=l2(0.01)))
model.add(Dense(output_dim=500, activation='sigmoid', W_regularizer=l2(0.01)))
model.add(Dense(output_dim=17, activation='sigmoid', W_regularizer=l2(0.01)))
# Compile model
sgd = SGD(lr=0.1)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.summary()
# Fit the model
model = model.fit(X_train, y_train, batch_size = 256,
nb_epoch = 20, verbose=2, validation_data=(X_test,y_test))
predicted = model.model.predict_classes(X_test)
# put the y_test back into a format of non-one-hot encoded for comparison
y_test_orig = pd.DataFrame(y_test).idxmax(1)
print "Neural Network Results\nConfusion Matrix: "
print confusion_matrix(y_test_orig,predicted)
print "\n Classifcation Report"
print classification_report(y_test_orig,predicted)
# let's try logistic regression.
# Stochastic Logistic Regression
model = Sequential()
# validation loss
model.add(Flatten(input_shape=(3, 50, 50)))
model.add(Dense(output_dim=17, activation='sigmoid', W_regularizer=l1(0.01)))
# Compile model
sgd = SGD(lr=0.1)
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
model.summary()
# Fit the model
model = model.fit(X_train, y_train, batch_size = 256,
nb_epoch = 100, verbose=0, validation_data=(X_test,y_test))
predicted = model.model.predict_classes(X_test)
# put the y_test back into a format of non-one-hot encoded for comparison
y_test_orig = pd.DataFrame(y_test).idxmax(1)
print "Logistic Regression Results\nConfusion Matrix: "
print confusion_matrix(y_test_orig,predicted)
print "\n Classifcation Report"
print classification_report(y_test_orig,predicted)
# reshape the training and test data
X_train_new = np.empty(shape=[X_train.shape[0]] + [7500], dtype='float32')
for i in range(X_train.shape[0]):
X_train_new[i,:] = X_train[i,:,:,:].flatten()
X_test_new = np.empty(shape=[X_test.shape[0]] + [7500], dtype='float32')
for i in range(X_test.shape[0]):
X_test_new[i,:] = X_test[i,:,:,:].flatten()
print X_test_new.shape
print X_train_new.shape
# Stochastic Logistic Regression
model = Sequential()
# validation loss
model.add(Dense(output_dim=17, input_shape=[7500],
activation='sigmoid', W_regularizer=l2(0)))
# Compile model
sgd = SGD(lr=0.1)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.summary()
# Fit the model
model = model.fit(X_train_new, y_train, batch_size = 256,
nb_epoch = 100, verbose=0, validation_data=(X_test_new,y_test))
predicted = model.model.predict_classes(X_test_new)
# put the y_test back into a format of non-one-hot encoded for comparison
y_test_orig = pd.DataFrame(y_test).idxmax(1)
print "Logistic Regression Results\nConfusion Matrix: "
print confusion_matrix(y_test_orig,predicted)
print "\n Classifcation Report"
print classification_report(y_test_orig,predicted)
# We will try sklearn's logistic regression.
from sklearn.linear_model import LogisticRegression
# regular
y_train_orig = pd.DataFrame(y_train).idxmax(1)
y_test_orig = pd.DataFrame(y_test).idxmax(1)
lr = LogisticRegression()
lr.fit(X_train_new, y_train_orig)
predicted = lr.predict(X_test_new)
print "Logistic Regression Results\nConfusion Matrix: "
print confusion_matrix(y_test_orig,predicted)
print "\n Classifcation Report"
print classification_report(y_test_orig,predicted)
#Finally, we'll try the random forest, which is not known for being great with image data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.grid_search import GridSearchCV
rfc = RandomForestClassifier(random_state=47, n_estimators=100)
rfc.fit(X_train_new, y_train_orig)
predicted = rfc.predict(X_test_new)
print "Decision Tree Results\nConfusion Matrix: "
print confusion_matrix(y_test_orig,predicted)
print "\n Classifcation Report"
print classification_report(y_test_orig,predicted)
import matplotlib.pyplot as plt
g=imgdf.groupby(['category']).count()['_unit_id']
objects = g.index
y_pos = np.arange(len(objects))
performance = g
plt.bar(y_pos, performance, align='center', alpha=0.5)
#plt.xticks(y_pos, objects)
plt.xticks(y_pos, objects, rotation='vertical')
plt.ylabel('Class Counts')
plt.title('Dress Classes in Data')
plt.show()
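# The imbalance above suggests trying balanced class weights; an illustrative re-fit of the
# random forest (an extension of the analysis, assuming a scikit-learn version that supports class_weight='balanced'):
rfc_bal = RandomForestClassifier(random_state=47, n_estimators=100, class_weight='balanced')
rfc_bal.fit(X_train_new, y_train_orig)
print classification_report(y_test_orig, rfc_bal.predict(X_test_new))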
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters and read data
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Barachant <alexandre.barachant@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cross_validation import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import MinMaxScaler
from mne import io, pick_types, read_events, Epochs
from mne.datasets import sample
from mne.preprocessing import Xdawn
from mne.decoding import Vectorizer
from mne.viz import tight_layout
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, method='iir')
events = read_events(event_fname)
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
add_eeg_ref=False, verbose=False)
# Create classification pipeline
clf = make_pipeline(Xdawn(n_components=3),
Vectorizer(),
MinMaxScaler(),
LogisticRegression(penalty='l1'))
# Get the labels
labels = epochs.events[:, -1]
# Cross validator
cv = StratifiedKFold(y=labels, n_folds=10, shuffle=True, random_state=42)
# Do cross-validation
preds = np.empty(len(labels))
for train, test in cv:
clf.fit(epochs[train], labels[train])
preds[test] = clf.predict(epochs[test])
# Classification report
target_names = ['aud_l', 'aud_r', 'vis_l', 'vis_r']
report = classification_report(labels, preds, target_names=target_names)
print(report)
# Normalized confusion matrix
cm = confusion_matrix(labels, preds)
cm_normalized = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
# Plot confusion matrix
plt.imshow(cm_normalized, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Normalized Confusion matrix')
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Your math textbook is shouting:
Step2: <img src="img/Dctjpeg.png" width="600"/>
Step3: $$ G = {DCT} \cdot f \cdot {DCT}^{T} $$
Step4: Hybrid Image
Step5: More and more
|
<ASSISTANT_TASK:>
Python Code:
!./rungeogebra
show_dct_fig()
img = mpimg.imread('img/abel.jpg'); plt.imshow(img, cmap=mpl.cm.gray);
show_image(img)
tiny = img[40:48, 64:72];show_image(tiny)
tinyDCT = doDCT(tiny);show_image(tinyDCT)
figure(figsize=(12,36))
for u in range(12):
subplot(6, 2, u+1)
title(str(u))
imshow(dct_all(img, u), cmap=mpl.cm.gray, interpolation="none")
show_dct_fig()
w('掌法')
w('蛤蟆功')-w('歐陽鋒')+ w('洪七公')
w("華箏")-w("郭靖") + w("周師兄")
w('丘處機') + (w('鐵杖')-w("柯鎮惡"))
w("鐵掌") + (w('周伯通')-w("空明拳"))
w('降龍十八掌')-w("郭靖")
w("飯")-w("吃")+w("喝")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Further Assumptions Of The Least Squares Model
Step2: Above does not match the book, could not get code to work as shown.
|
<ASSISTANT_TASK:>
Python Code:
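# NOTE (assumption): these snippets follow the book this notebook is based on and rely on helpers
# defined there or in earlier cells (dot, minimize_stochastic, median, standard_deviation,
# daily_minutes_clean); the standard-library random module is used throughout.
import random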
def predict(x_i, beta):
return dot(x_i, beta)
def error(x_i, y_i, beta):
return y_i - predict(x_i, beta)
def squared_error(x_i, y_i, beta):
return error(x_i, y_i, beta)**2
def squared_error_gradient(x_i, y_i, beta):
return [-2 * x_ij * error(x_i, y_i, beta) for x_ij in x_i]
def estimate_beta(x, y):
beta_initial = [random.random() for x_i in x[0]]
return minimize_stochastic(squared_error, squared_error_gradient, x, y, beta_initial, 0.001)
x = [1, # constant term,
49, # number of friends
4, # work hours per day
0] # does not have a PhD
random.seed(0)
beta = estimate_beta([x], daily_minutes_clean)
beta
def bootstrap_sample(data):
return [random.choice(data) for _ in data]
def bootstrap_statistic(data, stats_fn, num_samples):
return [stats_fn(bootstrap_sample(data)) for _ in range(num_samples)]
close_to_100 = [99.5 + random.random() for _ in range(101)]
far_from_100 = ([99.5 + random.random()] +
[random.random() for _ in range(50)] +
[200 + random.random() for _ in range(50)])
bootstrap_statistic(close_to_100, median, 100)[:5]
bootstrap_statistic(far_from_100, median, 100)[:5]
standard_deviation(close_to_100), standard_deviation(far_from_100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's Import Some Data through NOAA
Step2: Take a peek to ensure everything was read successfully and understand the dataset that you have
Step3: Select what you're looking for
Step4: Turn list of urls into one large, combined (concatenated) dataset based on time
Step5: Resample into daily averages
Step6: Write out data for processing
|
<ASSISTANT_TASK:>
Python Code:
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import netCDF4 as nc
from mpl_toolkits.basemap import Basemap
%%time
heights = [] # empty array to append opened netCDF's to
temps = []
date_range = np.arange(1995,2001,1) # years to download; np.arange excludes the stop value, so 1995 through 2000 inclusive needs a stop of 2001
for i in date_range:
url_h = 'https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/hgt.%s.nc' % i # string subset --> %.s and % i will insert the i in date_range we are looping through
url_t = 'https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.%s.nc' % i
print(url_h, url_t)
h1 = xr.open_dataset(url_h)
t1 = xr.open_dataset(url_t) # open netcdfs with xarray
heights.append(h1) # append opened files
temps.append(t1)
heights[0]
temps[0]
%%time
for i in range(len(heights)):
heights[i] = heights[i].sel(lat = slice(90,0), level = 500)
temps[i] = temps[i].sel(lat = slice(90,0), level = 925)
%%time
concat_h = xr.concat(heights, dim = 'time') # aligns all the lat, lon, lev, values of all the datasets based on dimesnion of time
%%time
concat_t = xr.concat(temps, dim = 'time')
%%time
concat_h = concat_h.resample(time = '24H').mean(dim = 'time') # resample can take various time arguments such as season, year, minutes
concat_t = concat_t.resample(time = '24H').mean(dim = 'time')
%%time
concat_h.to_netcdf('heights_9520.nc')
%%time
concat_t.to_netcdf('temps_9520.nc')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make sure it all works by running a test case
Step2: Setup that went into test/test_decam_rex.py
Step7: "brick 1102p240" is in the survey-bricks.fits.gz table.
|
<ASSISTANT_TASK:>
Python Code:
module use /global/common/$NERSC_HOST/contrib/desi/modulefiles
module load desiconda/20170719-1.1.9-imaging
conda create --prefix $CSCRATCH/conda-envs/20170719-1.1.9-imaging --file $DESICONDA/pkg_list.txt
source activate $CSCRATCH/conda-envs/20170719-1.1.9-imaging
# SF98 dust maps
export MYDIR=$CSCRATCH/repos
mkdir -p $MYDIR/dust/maps
cd $MYDIR/dust/maps
wget -c http://portal.nersc.gov/project/cosmo/temp/dstn/travis-ci/maps/SFD_dust_4096_ngp.fits
wget -c http://portal.nersc.gov/project/cosmo/temp/dstn/travis-ci/maps/SFD_dust_4096_sgp.fits
export DUST_DIR=$MYDIR/dust
# imaging pipeline
cd $MYDIR
git clone https://github.com/legacysurvey/legacypipe.git
cd legacypipe
git checkout f4fc46ea0
cd py/test
wget https://raw.githubusercontent.com/legacysurvey/obiwan/master/py/obiwan/test/end_to_end/test_decam_rex.p
cd ../
python test/test_decam_rex.py
main(args=['--brick', '1102p240', '--zoom', '500', '600', '650', '750',
'--force-all', '--no-write', '--no-wise',
'--survey-dir', surveydir,
'--outdir', outdir])
# https://github.com/legacysurvey/legacypipe/blob/master/py/legacypipe/decam.py
# Edited to be barebones
from __future__ import print_function
import os
import numpy as np
import fitsio
from legacypipe.image import CalibMixin
from legacypipe.cpimage import CPImage
from legacypipe.survey import LegacySurveyData
'''
Code specific to images from the Dark Energy Camera (DECam).
'''
class DecamImage(CPImage, CalibMixin):
'''
A LegacySurveyImage subclass to handle images from the Dark Energy
Camera, DECam, on the Blanco telescope.
'''
# background subtraction fitting spline to every 512 pixels, interpolating between
splinesky_boxsize = 512
def __init__(self, survey, t):
super(DecamImage, self).__init__(survey, t)
# Check that these are set properly
#self.imgfn # image relative path starting from legacy_survey_dir/images/
#self.dqfn # bad pixel image
#self.wtfn # invvar image
# zeropoint units in ADU/sec
self.ccdzpt += 2.5 * np.log10(self.exptime)
def __str__(self):
return 'DECam ' + self.name
@classmethod
def nominal_zeropoints(self):
# Units ADU/sec for a good night 2 years ago
return dict(g = 25.08,
r = 25.29,
z = 24.92,)
@classmethod
def photometric_ccds(self, survey, ccds):
'''Remove exposures from survey-ccds-1.fits.gz if the image quality is not
good enough.
Args:
ccds: the survey-ccds-1.fits.gz table
'''
# Nominal zeropoints (DECam)
z0 = self.nominal_zeropoints()
z0 = np.array([z0[f[0]] for f in ccds.filter])
# You can skipping removing any of them with:
good = np.ones(len(ccds), bool)
return np.flatnonzero(good)
@classmethod
def ccd_cuts(self, survey, ccds):
return np.zeros(len(ccds), np.int32)
def get_good_image_subregion(self):
'''Optional.'''
x0,x1,y0,y1 = None,None,None,None
# x0,x1,y0,y1 = 100,1023,100,self.height-100
return x0,x1,y0,y1
def read_dq(self, header=False, **kwargs):
'''Read the bad pixel image and possibly its header.'''
print('Reading data quality from', self.dqfn, 'hdu', self.hdu)
dq,hdr = self._read_fits(self.dqfn, self.hdu, header=True, **kwargs)
if header:
return dq,hdr
else:
return dq
def read_invvar(self, clip=True, clipThresh=0.2, **kwargs):
print('Reading inverse-variance from', self.wtfn, 'hdu', self.hdu)
invvar = self._read_fits(self.wtfn, self.hdu, **kwargs)
return invvar
def run_calibs(self, psfex=True, sky=True, se=False,
funpack=False, fcopy=False, use_mask=True,
force=False, just_check=False, git_version=None,
splinesky=False):
'''Read the psfex PSF model and the splinesky model.'''
self.read_psf_model(0, 0, pixPsf=True, hybridPsf=True)
self.read_sky_model(splinesky=splinesky)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TODO
Step2: \begin{eqnarray}
Step3: \begin{eqnarray}
Step4: \begin{eqnarray}
Step5: \begin{eqnarray}
Step6: \begin{eqnarray}
Step7: \begin{eqnarray}
Step8: \begin{eqnarray}
Step9: \begin{eqnarray}
Step10: \begin{eqnarray}
Step11: \begin{eqnarray}
Step12: \begin{eqnarray}
Step13: \begin{eqnarray}
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (6, 6)
import math
import cmath # math functions for complex numbers
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets
from ipywidgets import interact
import sympy as sp
# See: http://docs.sympy.org/latest/tutorial/printing.html
sp.init_printing()
t = sp.symbols("t")
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.cos(0), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.cos(t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.sin(t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.cos(2*t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.sin(2*t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4
n = 0
sp.plot(sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4, (t, -sp.pi, sp.pi));
import mpmath
n = 2
mpmath.cplot(lambda t: mpmath.exp(-mpmath.j * n * t), points=100000)
n = 0
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
n = 1
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
n = -1
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
n = 2
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
n = -2
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
n = 3
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
n = -3
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
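# Recap (a sketch added for clarity, not part of the original notebook): the complex
# Fourier coefficients are c_n = (1/(2*pi)) * integral, i.e. half of the values printed
# above. Collecting them all in one loop:
f_expr = sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4
coeffs = {}
for n_val in range(-3, 4):
    integ_n = sp.integrate(f_expr * sp.exp(-sp.I * n_val * t), (t, -sp.pi, sp.pi))
    coeffs[n_val] = sp.simplify(integ_n / (2 * sp.pi))
coeffs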
# Fourier synthesis: f(t) = sum_n c_n * exp(I*n*t), with c_n = (integral/pi)/2 from the cells above
eq1 = sp.simplify((8 * sp.exp(0)
+ (1 - 2 * sp.I) * sp.exp( sp.I * t)
+ (1 + 2 * sp.I) * sp.exp(-sp.I * t)
+ 3 * sp.I * sp.exp( 2 * sp.I * t)
- 3 * sp.I * sp.exp(-2 * sp.I * t)) / 2)
eq1
sp.simplify(eq1 - (sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2 * t) + 4))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This code produces what looks like a relatively simple animation of two side-by-side figures, but is actually a deeply nested data structure
Step2: The structure of this object can be seen in the repr()
Step3: Nesting hierarchy <a id='NestingHierarchy'></a>
Step4: GridSpace Level
Step5: HoloMap Level
Step6: As shown in the schematic above, a HoloMap contains many elements with associated keys. In this example, these keys are indexed with a dimension Frequency, which is why the Frequency varies when you play the animation here.
Step7: NdOverlay Level
Step8: Curve Level
Step9: Data Level
Step10: Of course, you can keep going down into the Numpy array as far as it goes, to get down to a single datapoint, in this case the value at x=5.2. Note that the supplied index has to match the x-value of the underlying data exactly to floating point precision, so only use this if you know exactly what you are doing
Step11: Here the value returned is the y-value of the underlying data, at last! Of course, you can also use all of the access methods provided by the numpy array itself, using .data, e.g. .data[52], but note that this will use the native indexing scheme of the array, i.e. integer access, starting at zero, not the continuous coordinate system we provide through HoloViews.
Step12: The second form demonstrates HoloViews' deep indexing feature. This is as far as we can index before reaching a heterogeneous type (the Overlay), where we need to use attribute access. Here is the more explicit method of indexing down to a curve
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import holoviews as hv
%reload_ext holoviews.ipython
np.random.seed(10)
def sine_curve(phase, freq, amp, power, samples=102):
xvals = [0.1* i for i in range(samples)]
return [(x, amp*np.sin(phase+freq*x)**power) for x in xvals]
phases = [0, np.pi/2, np.pi, 3*np.pi/2]
powers = [1,2,3]
amplitudes = [0.5,0.75, 1.0]
frequencies = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
gridspace = hv.GridSpace(kdims=['Amplitude', 'Power'], group='Parameters', label='Sines')
for power in powers:
for amplitude in amplitudes:
holomap = hv.HoloMap(kdims=['Frequency'])
for frequency in frequencies:
sines = {phase : hv.Curve(sine_curve(phase, frequency, amplitude, power))
for phase in phases}
ndoverlay = hv.NdOverlay(sines , kdims=['Phase']).relabel(group='Phases',
label='Sines', depth=1)
overlay = ndoverlay * hv.Points([(i,0) for i in range(0,10)], group='Markers', label='Dots')
holomap[frequency] = overlay
gridspace[amplitude, power] = holomap
penguins = hv.RGB.load_image('../assets/penguins.png').relabel(group="Family", label="Penguin")
layout = gridspace + penguins
layout
print(repr(layout))
print(repr(layout))
layout.Parameters.Sines
layout.Parameters.Sines[0.5, 1]
layout.Parameters.Sines[0.5, 1][1.0]
(layout.Parameters.Sines[0.5, 1][1].Phases.Sines +
layout.Parameters.Sines[0.5, 1][1].Markers.Dots)
l=layout.Parameters.Sines[0.5, 1][1].Phases.Sines[0.0]
l
repr(l)
type(layout.Parameters.Sines[0.5, 1][1].Phases.Sines[0.0].data)
layout.Parameters.Sines[0.5, 1][1].Phases.Sines[0.0][5.2]
o1 = layout.Parameters.Sines.select(Amplitude=0.5, Power=1.0).select(Frequency=1.0)
o2 = layout.Parameters.Sines.select(Amplitude=0.5, Power=1.0, Frequency=1.0)
o1 + o2
layout.Parameters.Sines.select(Amplitude=0.5,Power=1.0,
Frequency=1.0).Phases.Sines.select(Phase=0.0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Understanding Movie Similarity
Step2: Creating recommendations for your personal ratings
|
<ASSISTANT_TASK:>
Python Code:
import os
os.chdir('..')
# Import all the packages we need to generate recommendations
import numpy as np
import pandas as pd
import src.utils as utils
import src.recommenders as recommenders
import src.similarity as similarity
# imports necesary for plotting
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Enable logging on Jupyter notebook
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# loads dataset
dataset_folder = os.path.join(os.getcwd(), 'data')
dataset_folder_ready = utils.load_dataset(dataset_folder)
# adds personal ratings to original dataset ratings file.
ratings_file = os.path.join(dataset_folder, 'ml-latest-small','ratings-merged.csv')
[ratings, my_customer_number] = utils.merge_datasets(dataset_folder_ready, ratings_file)
# the data is stored in a long pandas dataframe
# we need to pivot the data to create a [user x movie] matrix
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_value=0)
ratings_matrix = ratings_matrix.transpose()
# find similar movies
# try with different movie titles and see what happens
movie_title = 'Star Wars: Episode VI - Return of the Jedi (1983)'
similarity_type = "cosine"
logger.info('top-10 movies similar to %s, using %s similarity', movie_title, similarity_type)
print(similarity.compute_nearest_neighbours(movie_title, ratings_matrix, similarity_type)[0:10])
# find similar movies
# try with different movie titles and see what happens
movie_title = 'All About My Mother (Todo sobre mi madre) (1999)'
similarity_type = "pearson"
logger.info('top-10 movies similar to: %s, using %s similarity', movie_title, similarity_type)
print(similarity.compute_nearest_neighbours(movie_title, ratings_matrix, similarity_type)[0:10])
# get recommendations for a single user
recommendations = recommenders.recommend_uknn(ratings, my_customer_number, K=200, similarity_metric='cosine', N=10)
recommendations
# get recommendations for a single user
recommendations = recommenders.recommend_iknn(ratings, my_customer_number, K=100, similarity_metric='cosine')
recommendations
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Style Transfer
Step2: Load the pretrained SqueezeNet model. This model has been ported from PyTorch, see cs231n/classifiers/squeezenet.py for the model architecture.
Step4: Computing Loss
Step5: Test your content loss. You should see errors less than 0.001.
Step7: Style loss
Step8: Test your Gram matrix code. You should see errors less than 0.001.
Step10: Next, implement the style loss
Step11: Test your style loss implementation. The error should be less than 0.001.
Step13: Total-variation regularization
Step14: Test your TV loss implementation. Error should be less than 0.001.
Step16: Style Transfer
Step17: Generate some pretty pictures!
Step18: Feature Inversion
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import os
from scipy.misc import imread, imresize
import numpy as np
import matplotlib.pyplot as plt
# Helper functions to deal with image preprocessing
from cs231n.image_utils import load_image, preprocess_image, deprocess_image
%matplotlib inline
def get_session():
"""Create a session that dynamically allocates memory."""
# See: https://www.tensorflow.org/tutorials/using_gpu#allowing_gpu_memory_growth
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
return session
def rel_error(x,y):
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Older versions of scipy.misc.imresize yield different results
# from newer versions, so we check to make sure scipy is up to date.
def check_scipy():
import scipy
vnum = int(scipy.__version__.split('.')[1])
assert vnum >= 16, "You must install SciPy >= 0.16.0 to complete this notebook."
check_scipy()
from cs231n.classifiers.squeezenet import SqueezeNet
import tensorflow as tf
tf.reset_default_graph() # remove all existing variables in the graph
sess = get_session() # start a new Session
# Load pretrained SqueezeNet model
SAVE_PATH = 'cs231n/datasets/squeezenet.ckpt'
if not os.path.exists(SAVE_PATH):
raise ValueError("You need to download SqueezeNet!")
model = SqueezeNet(save_path=SAVE_PATH, sess=sess)
# Load data for testing
content_img_test = preprocess_image(load_image('styles/tubingen.jpg', size=192))[None]
style_img_test = preprocess_image(load_image('styles/starry_night.jpg', size=192))[None]
answers = np.load('style-transfer-checks-tf.npz')
def content_loss(content_weight, content_current, content_original):
"""Compute the content loss for style transfer.
Inputs:
- content_weight: scalar constant we multiply the content_loss by.
- content_current: features of the current image, Tensor with shape [1, height, width, channels]
- content_target: features of the content image, Tensor with shape [1, height, width, channels]
Returns:
- scalar content loss
"""
pass
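    # One possible implementation (a sketch, not necessarily the reference solution):
    # the content loss is the content weight times the sum of squared differences
    # between the current and original feature maps.
    return content_weight * tf.reduce_sum(tf.square(content_current - content_original))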
def content_loss_test(correct):
content_layer = 3
content_weight = 6e-2
c_feats = sess.run(model.extract_features()[content_layer], {model.image: content_img_test})
bad_img = tf.zeros(content_img_test.shape)
feats = model.extract_features(bad_img)[content_layer]
student_output = sess.run(content_loss(content_weight, c_feats, feats))
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
content_loss_test(answers['cl_out'])
def gram_matrix(features, normalize=True):
"""Compute the Gram matrix from features.
Inputs:
- features: Tensor of shape (1, H, W, C) giving features for
a single image.
- normalize: optional, whether to normalize the Gram matrix
If True, divide the Gram matrix by the number of neurons (H * W * C)
Returns:
- gram: Tensor of shape (C, C) giving the (optionally normalized)
Gram matrices for the input image.
"""
pass
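    # One possible implementation (a sketch, not necessarily the reference solution):
    # flatten the spatial dimensions, take G = F^T F, and optionally normalize by the
    # number of neurons H * W * C.
    shape = tf.shape(features)
    feats = tf.reshape(features, (shape[1] * shape[2], shape[3]))
    gram = tf.matmul(feats, feats, transpose_a=True)
    if normalize:
        gram = gram / tf.cast(shape[1] * shape[2] * shape[3], features.dtype)
    return gram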
def gram_matrix_test(correct):
gram = gram_matrix(model.extract_features()[5])
student_output = sess.run(gram, {model.image: style_img_test})
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
gram_matrix_test(answers['gm_out'])
def style_loss(feats, style_layers, style_targets, style_weights):
"""Computes the style loss at a set of layers.
Inputs:
- feats: list of the features at every layer of the current image, as produced by
the extract_features function.
- style_layers: List of layer indices into feats giving the layers to include in the
style loss.
- style_targets: List of the same length as style_layers, where style_targets[i] is
a Tensor giving the Gram matrix the source style image computed at
layer style_layers[i].
- style_weights: List of the same length as style_layers, where style_weights[i]
is a scalar giving the weight for the style loss at layer style_layers[i].
Returns:
- style_loss: A Tensor containing the scalar style loss.
"""
# Hint: you can do this with one for loop over the style layers, and should
# not be very much code (~5 lines). You will need to use your gram_matrix function.
pass
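    # One possible implementation (a sketch, not necessarily the reference solution):
    # accumulate the weighted squared Frobenius distance between the Gram matrix of
    # the current image and the precomputed target Gram matrix at each style layer.
    loss = tf.constant(0.0)
    for layer, target, weight in zip(style_layers, style_targets, style_weights):
        gram = gram_matrix(feats[layer])
        loss = loss + weight * tf.reduce_sum(tf.square(gram - target))
    return loss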
def style_loss_test(correct):
style_layers = [1, 4, 6, 7, 10]
style_weights = [300000, 1000, 15, 3]
feats = model.extract_features()
style_target_vars = []
for idx in style_layers:
style_target_vars.append(gram_matrix(feats[idx]))
style_targets = sess.run(style_target_vars,
{model.image: style_img_test})
s_loss = style_loss(feats, style_layers, style_targets, style_weights)
student_output = sess.run(s_loss, {model.image: content_img_test})
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
style_loss_test(answers['sl_out'])
def tv_loss(img, tv_weight):
"""Compute total variation loss.
Inputs:
- img: Tensor of shape (1, H, W, 3) holding an input image.
- tv_weight: Scalar giving the weight w_t to use for the TV loss.
Returns:
- loss: Tensor holding a scalar giving the total variation loss
for img weighted by tv_weight.
"""
# Your implementation should be vectorized and not require any loops!
pass
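    # One possible implementation (a sketch, not necessarily the reference solution):
    # sum of squared differences between neighbouring pixels along the height and
    # width dimensions, scaled by tv_weight.
    h_var = tf.reduce_sum(tf.square(img[:, 1:, :, :] - img[:, :-1, :, :]))
    w_var = tf.reduce_sum(tf.square(img[:, :, 1:, :] - img[:, :, :-1, :]))
    return tv_weight * (h_var + w_var)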
def tv_loss_test(correct):
tv_weight = 2e-2
t_loss = tv_loss(model.image, tv_weight)
student_output = sess.run(t_loss, {model.image: content_img_test})
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
tv_loss_test(answers['tv_out'])
def style_transfer(content_image, style_image, image_size, style_size, content_layer, content_weight,
style_layers, style_weights, tv_weight, init_random = False):
"""Run style transfer!
Inputs:
- content_image: filename of content image
- style_image: filename of style image
- image_size: size of smallest image dimension (used for content loss and generated image)
- style_size: size of smallest style image dimension
- content_layer: layer to use for content loss
- content_weight: weighting on content loss
- style_layers: list of layers to use for style loss
- style_weights: list of weights to use for each layer in style_layers
- tv_weight: weight of total variation regularization term
- init_random: initialize the starting image to uniform random noise
"""
# Extract features from the content image
content_img = preprocess_image(load_image(content_image, size=image_size))
feats = model.extract_features(model.image)
content_target = sess.run(feats[content_layer],
{model.image: content_img[None]})
# Extract features from the style image
style_img = preprocess_image(load_image(style_image, size=style_size))
style_feat_vars = [feats[idx] for idx in style_layers]
style_target_vars = []
# Compute list of TensorFlow Gram matrices
for style_feat_var in style_feat_vars:
style_target_vars.append(gram_matrix(style_feat_var))
# Compute list of NumPy Gram matrices by evaluating the TensorFlow graph on the style image
style_targets = sess.run(style_target_vars, {model.image: style_img[None]})
# Initialize generated image to content image
if init_random:
img_var = tf.Variable(tf.random_uniform(content_img[None].shape, 0, 1), name="image")
else:
img_var = tf.Variable(content_img[None], name="image")
# Extract features on generated image
feats = model.extract_features(img_var)
# Compute loss
c_loss = content_loss(content_weight, feats[content_layer], content_target)
s_loss = style_loss(feats, style_layers, style_targets, style_weights)
t_loss = tv_loss(img_var, tv_weight)
loss = c_loss + s_loss + t_loss
# Set up optimization hyperparameters
initial_lr = 3.0
decayed_lr = 0.1
decay_lr_at = 180
max_iter = 200
# Create and initialize the Adam optimizer
lr_var = tf.Variable(initial_lr, name="lr")
# Create train_op that updates the generated image when run
with tf.variable_scope("optimizer") as opt_scope:
train_op = tf.train.AdamOptimizer(lr_var).minimize(loss, var_list=[img_var])
# Initialize the generated image and optimization variables
opt_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=opt_scope.name)
sess.run(tf.variables_initializer([lr_var, img_var] + opt_vars))
# Create an op that will clamp the image values when run
clamp_image_op = tf.assign(img_var, tf.clip_by_value(img_var, -1.5, 1.5))
f, axarr = plt.subplots(1,2)
axarr[0].axis('off')
axarr[1].axis('off')
axarr[0].set_title('Content Source Img.')
axarr[1].set_title('Style Source Img.')
axarr[0].imshow(deprocess_image(content_img))
axarr[1].imshow(deprocess_image(style_img))
plt.show()
plt.figure()
# Main optimization loop: update the generated image for max_iter steps
for t in range(max_iter):
# Take an optimization step to update img_var
sess.run(train_op)
if t < decay_lr_at:
sess.run(clamp_image_op)
if t == decay_lr_at:
sess.run(tf.assign(lr_var, decayed_lr))
if t % 100 == 0:
print('Iteration {}'.format(t))
img = sess.run(img_var)
plt.imshow(deprocess_image(img[0], rescale=True))
plt.axis('off')
plt.show()
print('Iteration {}'.format(t))
img = sess.run(img_var)
plt.imshow(deprocess_image(img[0], rescale=True))
plt.axis('off')
plt.show()
# Composition VII + Tubingen
params1 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/composition_vii.jpg',
'image_size' : 192,
'style_size' : 512,
'content_layer' : 3,
'content_weight' : 5e-2,
'style_layers' : (1, 4, 6, 7, 8),
'style_weights' : (20000, 500, 12, 1),
'tv_weight' : 5e-2
}
style_transfer(**params1)
# Scream + Tubingen
params2 = {
'content_image':'styles/tubingen.jpg',
'style_image':'styles/the_scream.jpg',
'image_size':192,
'style_size':224,
'content_layer':3,
'content_weight':3e-2,
'style_layers':[1, 4, 6, 7, 10],
'style_weights':[200000, 800, 12, 1],
'tv_weight':2e-2
}
style_transfer(**params2)
# Starry Night + Tubingen
params3 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7, 10],
'style_weights' : [300000, 1000, 15, 3],
'tv_weight' : 2e-2
}
style_transfer(**params3)
# Feature Inversion -- Starry Night + Tubingen
params_inv = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7, 10],
'style_weights' : [0, 0, 0, 0], # we discard any contributions from style to the loss
'tv_weight' : 2e-2,
'init_random': True # we want to initialize our image to be random
}
style_transfer(**params_inv)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A discussed in the grid tutorial, all data stored on the grid exists as "flat" one-dimensional arrays. This means that information can be retrieved from these grids using the ID of an grid element as the index
Step2: Any values we defined across the grid are indexed in the same way, e.g., an array of elevations would be of shape (n-nodes, ).
Step3: All the field creation routines share two optional keywords
Step4: Accessing a data field, deleting a data field
Step5: You'll also very commonly see some common "syntactic sugar" for this, where the element key is replaced by a grid property called grid.at_[element]. i.e.,
Step6: Because these structures are dictionary-like, we can use the usual set of Python dictionary methods to interact with them too
Step7: The units are recorded in a further dict-like structure attached to at_[element]
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from landlab import RasterModelGrid, FieldError
from landlab.components import LinearDiffuser
mg = RasterModelGrid((3, 4))
# demonstrate that arrays of properties are n-elements long
(
mg.x_of_node.shape == (mg.number_of_nodes,)
and mg.length_of_link.shape == (mg.number_of_links,)
)
# what's the length of the link with ID 6 (the 7th link)?
mg.length_of_link[6]
no_1a = mg.add_zeros("field__number_one", at="node")
no_1b = mg.add_ones(
"field__number_two", at="link", dtype=int
) # fns can also take dtype
no_1b[mg.active_links] = 0
print(no_1b)
input_array = np.arange(mg.number_of_nodes, dtype=float)
try:
no_1c = mg.add_field(
"field__number_one", input_array, at="node", copy=False, units="m"
)
except FieldError:
print("ERROR: This field name already exists!")
# ...let's try that again:
no_1c = mg.add_field(
"field__number_one", input_array, at="node", copy=False, units="m", clobber=True
)
print(no_1c)
# note that the keyword `copy=False` means that the field array *is* the input_array...
input_array[:] = -1.0
print(no_1c)
mg["node"]["field__number_one"]
mg.at_node["field__number_one"] is mg["node"]["field__number_one"]
mg.at_node.keys() # see the existing fields at nodes
mg.at_node.clear() # delete all fields at nodes
mg.at_node.keys()
mg.at_link.keys()
mg.at_link.pop("field__number_two") # return the field, and remove it from the array
mg.at_link.keys()
z = mg.add_ones("field__number_3", at="node", units="km", clobber=True)
mg.at_node.units["field__number_3"]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Dataset
Step2: Some EDA tasks
Step3: 2 - Local airports (list with all the ~600 Brazilian public airports)
Step4: 3 - List of codes (two letters) used when there was a flight delay (departure)
Step5: 4 - The Weather data from https
|
<ASSISTANT_TASK:>
Python Code:
from datetime import datetime
# Pandas and NumPy
import pandas as pd
import numpy as np
# Matplotlib for additional customization
from matplotlib import pyplot as plt
%matplotlib inline
# Seaborn for plotting and styling
import seaborn as sns
# 1. Flight delay: any flight with (real_departure - planned_departure >= 15 minutes)
# 2. The Brazilian Federal Agency for Civil Aviation (ANAC) does not define exactly what is a "flight delay" (in minutes)
# 3. Anyway, the ANAC has a resolution for this subject: https://goo.gl/YBwbMy (last access: nov, 15th, 2017)
# ---
# DELAY, for this analysis, is defined as greater than 15 minutes (local flights only)
DELAY = 15
# Dataset: a sample with 700 records with flights that occurred in the beginning of 2017 (January to June)
flights = pd.read_csv('data/arf2017ISO-sample.csv', sep = ';', dtype = str)
# Using "lambda" to format "date" columns to ISO format: YYYY-MM-DD hh:mm:ss
flights['departure-est'] = flights[['departure-est']].apply(lambda row: row.str.replace("(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>\d{4}) (?P<HOUR>\d{2}):(?P<MIN>\d{2})", "\g<year>/\g<month>/\g<day> \g<HOUR>:\g<MIN>:00"), axis=1)
flights['departure-real'] = flights[['departure-real']].apply(lambda row: row.str.replace("(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>\d{4}) (?P<HOUR>\d{2}):(?P<MIN>\d{2})", "\g<year>/\g<month>/\g<day> \g<HOUR>:\g<MIN>:00"), axis=1)
flights['arrival-est'] = flights[['arrival-est']].apply(lambda row: row.str.replace("(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>\d{4}) (?P<HOUR>\d{2}):(?P<MIN>\d{2})", "\g<year>/\g<month>/\g<day> \g<HOUR>:\g<MIN>:00"), axis=1)
flights['arrival-real'] = flights[['arrival-real']].apply(lambda row: row.str.replace("(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>\d{4}) (?P<HOUR>\d{2}):(?P<MIN>\d{2})", "\g<year>/\g<month>/\g<day> \g<HOUR>:\g<MIN>:00"), axis=1)
# Departure and Arrival columns: from 'object' to 'date' format
flights['departure-est'] = pd.to_datetime(flights['departure-est'],errors='ignore')
flights['departure-real'] = pd.to_datetime(flights['departure-real'],errors='ignore')
flights['arrival-est'] = pd.to_datetime(flights['arrival-est'],errors='ignore')
flights['arrival-real'] = pd.to_datetime(flights['arrival-real'],errors='ignore')
# translate the flight status from portuguese to english
flights['flight-status'] = flights[['flight-status']].apply(lambda row: row.str.replace("REALIZADO", "ACCOMPLISHED"), axis=1)
flights['flight-status'] = flights[['flight-status']].apply(lambda row: row.str.replace("CANCELADO", "CANCELED"), axis=1)
flights[["departure-est", "departure-real", "arrival-est", "arrival-real", "flight-status"]].head(3)
# See: https://stackoverflow.com/questions/37287938/sort-pandas-dataframe-by-value
# or https://stackoverflow.com/questions/17618981/how-to-sort-pandas-data-frame-using-values-from-several-columns
df_departures = flights.groupby(['airport-A']).size().reset_index(name='number_departures')
df_departures.sort_values(by=['number_departures'], ascending=False, inplace=True)
df_departures.head()
with sns.axes_style('white'):
g = sns.factorplot("airport-A", data=flights, aspect=2,
kind="count", color='steelblue')
g.set_xticklabels(step=4)
g = sns.factorplot(x="airport-A", y="number_departures", data=df_departures, size=4, aspect=4)
# Get the hour of 'real depature' datetime
# See .dt acessor (http://pandas.pydata.org/pandas-docs/stable/basics.html#basics-dt-accessors)
flights['hour-departure-est'] = flights['departure-est'].dt.hour
flights['hour-departure-real'] = flights['departure-real'].dt.hour
flights['hour-arrival-est'] = flights['arrival-est'].dt.hour
flights['hour-arrival-real'] = flights['arrival-real'].dt.hour
flights.groupby(['airport-A', 'hour-departure-real']).size().groupby(level=1).max()
df_busy_hours = flights.groupby(['hour-departure-real']).size().reset_index(name='flight_by_hour')
df_busy_hours.sort_values(by=['flight_by_hour'], ascending=False, inplace=True)
df_busy_hours.head()
g = sns.factorplot(x="hour-departure-real", y="flight_by_hour", data=df_busy_hours, size=6, aspect=1.8)
with sns.axes_style('white'):
g = sns.factorplot("hour-departure-real", data=flights, size=5, aspect=3,
kind="count", color='steelblue')
g.set_xticklabels(step=1)
(flights['departure-real'] - flights['departure-est']).dt.components.head()
# Get the delta time (real - estimated) for depature and arrival times
# More about 'DateTime' at https://docs.scipy.org/doc/numpy-dev/reference/arrays.datetime.html#datetime-and-timedelta-arithmetic
flights['delay-departure']=(flights['departure-real'] - flights['departure-est'])
flights['delay-arrival']=(flights['arrival-real'] - flights['arrival-est'])
# Delay (departure and arrival) in minutes (using 'numpy.timedelta64')
# See this link: https://stackoverflow.com/questions/14920903/time-difference-in-seconds-from-numpy-timedelta64
flights['delay-departure']=np.divide(flights[['delay-departure']], np.timedelta64(1, 'm'))
flights['delay-arrival']=np.divide(flights[['delay-arrival']], np.timedelta64(1, 'm'))
sns.kdeplot(flights.loc[flights['delay-departure'] > 0, 'delay-departure'], label='departures (delay)', shade=True)
plt.xlabel('delay (in minutes)');
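# A small follow-up (a sketch, not in the original notebook): use the DELAY threshold
# defined at the top (15 minutes) to flag delayed local departures.
flights['is-delayed'] = flights['delay-departure'] >= DELAY
flights['is-delayed'].mean()  # fraction of departures delayed by 15 minutes or more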
# Airports dataset: all brazilian public airports (updated until october, 2017)
airports = pd.read_csv('data/brazilianPublicAirports-out2017.csv', sep = ';', dtype= str)
airports.head()
# Merge "flights" dataset with "airports" in order to identify
# local flights (origin and destination are in Brazil)
flights = pd.merge(flights, airports, left_on="airport-A", right_on="airport", how='left')
flights = pd.merge(flights, airports, left_on="airport-B", right_on="airport", how='left')
with sns.axes_style('white'):
g = sns.factorplot("airport-A", data=flights, aspect=2,
kind="count", color='steelblue')
g.set_xticklabels(step=5)
# ------------------------------------------------------------------
# List of codes (two letters) used to justify a delay on the flight
# - delayCodesShortlist.csv: list with YYY codes
# - delayCodesLongList.csv: list with XXX codes
# ------------------------------------------------------------------
delaycodes = pd.read_csv('data/delayCodesShortlist.csv', sep = ';', dtype = str)
delaycodesLongList = pd.read_csv('data/delayCodesLonglist.csv', sep = ';', dtype = str)
delaycodes.head()
# Weather sample: load the CSV with weather historical data (from Campinas, SP, Brazil, 2017)
weather = pd.read_csv('data/DataScience-Intensive-weatherAtCampinasAirport-2017-Campinas_Airport_2017Weather.csv', \
sep = ',', dtype = str)
weather["date"] = weather["year"].map(str) + "-" + weather["month"].map(str) + "-" + weather["day"].map(str)
weather["date"] = pd.to_datetime(weather['date'],errors='ignore')
weather.head()
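# A possible next step (a sketch, not in the original notebook): join the daily weather
# table with the flights on the calendar date of the real departure, so that delays can
# later be related to weather conditions. The 'date' column on flights is new here.
flights['date'] = flights['departure-real'].dt.normalize()
flights_weather = pd.merge(flights, weather, on='date', how='left')
flights_weather.head(3)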
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load metadata for 4 datasets
Step2: The individual filenames will look like this
Step3: Merging alignment data
Step4: Unfortunately filenames and sequence names tend to get a bit mangled when going from Fast5 to SAM, for various reasons. As of now, there is no particular convention for naming read files or naming the exported sequences. On the one hand, I don't feel like it is a good idea to abuse filenames as character-separated database rows; on the other hand, using the unique read id from the Fast5 file isn't very human-friendly either.
Step5: Visualizing the alignments
Step6: Processing the amplicons
Step7: How many reads failed to be assigned?
Step8: Purge these
Step9: The number of viable reads is diminishing quickly. But this can't be helped.
Step10: Unfortunately some amplicons are severely underrepresented, with one going as low as 635 reads.
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn
import porekit
import re
import pysam
import random
import feather
%matplotlib inline
directories = ["AmpliconOddEvenControl", "AmpliconOddReadUntil", "AmpliconEvenReadUntil", "Balanced"]
meta_frames = []
for d in directories:
print(d)
df = porekit.gather_metadata("/home/andi/nanopore/%s" % d, workers=4)
print(df.shape[0])
meta_frames.append(df)
meta = pd.concat (meta_frames)
for df in meta_frames:
print (df.shape)
meta_frames[0].index.values[0]
meta_frames[1].index.values[0]
meta_frames[2].index.values[0]
meta_frames[3].index.values[0]
def sam_to_dataframe(file_name):
sam = pysam.AlignmentFile(file_name)
records = []
for i, segment in enumerate(sam):
d = dict()
for k in ["query_name", "reference_start", "reference_end", "mapping_quality", ]:
d[k] = getattr(segment, k)
records.append(d)
alignments = pd.DataFrame.from_records(records)
return alignments
base = "/home/andi/nanopore/RUFigs/data"
bams = ["/fig3/RU_dudu/RU_dudu_Template.bam",
"/fig3/RU_udud/RU_udud_Template.bam",
"/fig3/NO_RU/NO_RU_Template.bam",
"/fig4/200/200_Template.bam",
]
alignments = pd.concat([sam_to_dataframe(base+file_name) for file_name in bams])
regexp = re.compile(r'_(?P<a>\d+)_(?P<b>\d+)_ch(?P<c>\d+)_file(?P<d>\d+)')
def extract(s):
try:
return "_".join(regexp.search(s).groups())
except:
return ""
alignments["alignment_key"] = alignments.query_name.map(extract)
meta["alignment_key"] = meta.index.map(extract)
alignments["alignment_key"].map(lambda s: s.split("_")[0]).unique()
meta["run_number"] = meta["alignment_key"].map(lambda s: s.split("_")[0])
meta2 = meta.reset_index().merge(alignments).set_index("filename")
meta2.shape
meta = meta2
f, ax = plt.subplots()
f.set_figwidth(13)
ax.hist(meta.reference_start, bins=110);
amplicons = [(52,1980),
(2065,3965),
(4070,5989),
(6059,7981),
(8012,9947),
(10008,11963),
(12006,13941),
(14011,15945),
(16076,17987),
(18022,19972),
(20053,21979),
]
def amplicon_from_position(pos):
for i,c in enumerate(amplicons):
a,b = c
if a<=pos<=b:
return i
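# Quick sanity check of the helper, using positions taken from the amplicon table above:
# 100 falls in amplicon 0, 2100 in amplicon 1, and 22000 in none of them (returns None).
amplicon_from_position(100), amplicon_from_position(2100), amplicon_from_position(22000)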
meta["amplicon"] = meta.reference_start.map(amplicon_from_position)
meta.amplicon.isnull().sum()
meta = meta[np.isnan(meta.amplicon)==False]
meta.shape
meta.query("template_length>500").groupby("amplicon").format.count()
sufficient = meta.query("template_length>=500")
all_files = sufficient.index.values
test_files = []
for i in range(11):
sub = sufficient[sufficient.amplicon==i]
test_files += list(np.random.choice(sub.index.values, 200))
training_files = list(set(sufficient.index.values) - set(test_files))
len(training_files), len(test_files)
test_data = sufficient.ix[np.array(test_files)]
feather.write_dataframe(test_data, "amplicon_test_metadata.feather")
training_data = sufficient.ix[np.array(training_files)]
feather.write_dataframe(training_data, "amplicon_training_metadata.feather")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2017 Hurricane Tracks
Step2: Plot one of the hurricanes
Step3: Plot all the hurricanes
|
<ASSISTANT_TASK:>
Python Code:
%bash
apt-get update
apt-get -y install python-mpltoolkits.basemap
from mpl_toolkits.basemap import Basemap
import google.datalab.bigquery as bq
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
query = """
#standardSQL
SELECT
name,
latitude,
longitude,
iso_time,
usa_sshs
FROM
`bigquery-public-data.noaa_hurricanes.hurricanes`
WHERE
basin = 'NA'
AND season = '2017'
"""
df = bq.Query(query).execute().result().to_dataframe()
df.head()
maria = df[df['name'] == 'MARIA'].sort_values('iso_time')
m = Basemap(llcrnrlon=-100.,llcrnrlat=0.,urcrnrlon=-20.,urcrnrlat=57.,
projection='lcc',lat_1=20.,lat_2=40.,lon_0=-60.,
resolution ='l',area_thresh=1000.)
x, y = m(maria['longitude'].values,maria['latitude'].values)
m.plot(x,y,linewidth=5,color='r')
# draw coastlines, meridians and parallels.
m.drawcoastlines()
m.drawcountries()
m.drawmapboundary(fill_color='#99ffff')
m.fillcontinents(color='#cc9966',lake_color='#99ffff')
m.drawparallels(np.arange(10,70,20),labels=[1,1,0,0])
m.drawmeridians(np.arange(-100,0,20),labels=[0,0,0,1])
plt.title('Hurricane Maria (2017)');
names = df.name.unique()
names
m = Basemap(llcrnrlon=-100.,llcrnrlat=0.,urcrnrlon=-20.,urcrnrlat=57.,
projection='lcc',lat_1=20.,lat_2=40.,lon_0=-60.,
resolution ='l',area_thresh=1000.)
for name in names:
if name != 'NOT_NAMED':
named = df[df['name'] == name].sort_values('iso_time')
x, y = m(named['longitude'].values,named['latitude'].values)
maxcat = max(named['usa_sshs'])
m.plot(x,y,linewidth=maxcat,color='b')
# draw coastlines, meridians and parallels.
m.drawcoastlines()
m.drawcountries()
m.drawmapboundary(fill_color='#99ffff')
m.fillcontinents(color='#cc9966',lake_color='#99ffff')
m.drawparallels(np.arange(10,70,20),labels=[1,1,0,0])
m.drawmeridians(np.arange(-100,0,20),labels=[0,0,0,1])
plt.title('Named North-Atlantic hurricanes (2017)');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Effect of bursting changes on PSD
Step2: 2. Effect of period changes on PSD
Step3: 3. Effect of symmetry changes on PSD
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import neurodsp
%matplotlib inline
import matplotlib.pyplot as plt
np.random.seed(0)
freq = 10
T = 100
Fs = 1000
cycle_features_use = {'amp_mean': 1, 'amp_burst_std': 0, 'amp_std': 0,
'period_mean': 100,
'period_burst_std': 0,
'period_std': 0,
'rdsym_mean': .3, 'rdsym_burst_std': 0, 'rdsym_std': 0}
prob_leave_burst = .2
loop_prob_enter_burst = [.1, .5, 1]
loop_oscs = []
for p_enter in loop_prob_enter_burst:
if p_enter == 1:
p_leave = 0
else:
p_leave = prob_leave_burst
osc = neurodsp.sim_bursty_oscillator(freq, T, Fs,
cycle_features = cycle_features_use,
prob_enter_burst=p_enter,
prob_leave_burst=p_leave)
loop_oscs.append(osc)
t = np.arange(0,T,1/Fs)
tidx = t <= 2
plt.figure(figsize=(12,3))
for i in range(len(loop_oscs)):
plt.plot(t[tidx], loop_oscs[i][tidx] + i*2)
plt.xlim((0,2))
plt.xlabel('Time (s)')
plt.ylabel('Voltage')
fs, psds = [], []
for i in range(len(loop_oscs)):
f, psd = neurodsp.spectral.psd(loop_oscs[i], Fs)
fs.append(f)
psds.append(psd)
plt.figure(figsize=(10,5))
for i in range(len(loop_oscs)):
fidx = fs[i] <= 100
plt.semilogy(fs[i][fidx], psds[i][fidx],
label=loop_prob_enter_burst[i])
plt.legend(title='burst prevalence')
freq = 10
T = 100
Fs = 1000
cycle_features_use = {'amp_mean': 1, 'amp_burst_std': 0, 'amp_std': 0,
'period_mean': 100,
'period_burst_std': 0,
'period_std': None,
'rdsym_mean': .3, 'rdsym_burst_std': 0, 'rdsym_std': 0}
prob_enter_burst = 1
prob_leave_burst = 0
loop_period_stds = [0, 3, 15]
loop_oscs = []
for per_std in loop_period_stds:
cycle_features_use['period_std'] = per_std
osc = neurodsp.sim_bursty_oscillator(freq, T, Fs,
cycle_features = cycle_features_use,
prob_enter_burst=prob_enter_burst,
prob_leave_burst=prob_leave_burst)
loop_oscs.append(osc)
t = np.arange(0,T,1/Fs)
tidx = t <= 2
plt.figure(figsize=(12,3))
for i in range(len(loop_oscs)):
plt.plot(t[tidx], loop_oscs[i][tidx] + i*2)
plt.xlim((0,2))
plt.xlabel('Time (s)')
plt.ylabel('Voltage')
fs, psds = [], []
for i in range(len(loop_oscs)):
f, psd = neurodsp.spectral.psd(loop_oscs[i], Fs)
fs.append(f)
psds.append(psd)
plt.figure(figsize=(10,5))
for i in range(len(loop_oscs)):
fidx = fs[i] <= 100
plt.semilogy(fs[i][fidx], psds[i][fidx],
label=loop_period_stds[i])
plt.legend(title='period st. dev.')
cycle_features_use = {'amp_mean': 1, 'amp_burst_std': 0, 'amp_std': 0,
'period_mean': 100,
'period_burst_std': 0,
'period_std': 0,
'rdsym_mean': .3, 'rdsym_burst_std': 0, 'rdsym_std': None}
prob_enter_burst = 1
prob_leave_burst = 0
loop_rdsym_stds = [0, .01, .07]
loop_oscs = []
for rdsym_std in loop_rdsym_stds:
cycle_features_use['rdsym_std'] = rdsym_std
osc = neurodsp.sim_bursty_oscillator(freq, T, Fs,
cycle_features = cycle_features_use,
prob_enter_burst=prob_enter_burst,
prob_leave_burst=prob_leave_burst)
loop_oscs.append(osc)
t = np.arange(0,T,1/Fs)
tidx = t <= 2
plt.figure(figsize=(12,3))
for i in range(len(loop_oscs)):
plt.plot(t[tidx], loop_oscs[i][tidx] + i*2)
plt.xlim((0,2))
plt.xlabel('Time (s)')
plt.ylabel('Voltage')
fs, psds = [], []
for i in range(len(loop_oscs)):
f, psd = neurodsp.spectral.psd(loop_oscs[i], Fs)
fs.append(f)
psds.append(psd)
plt.figure(figsize=(10,5))
for i in range(len(loop_oscs)):
fidx = fs[i] <= 100
plt.semilogy(fs[i][fidx], psds[i][fidx],
label=loop_rdsym_stds[i])
plt.legend(title='rdsym st. dev.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Background
Step2: If a scalp electrode was used as reference but was not saved alongside the
Step3: By default,
Step4: .. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ
Step5: Notice that the new reference (EEG 050) is now flat, while the original
Step6: Creating the average reference as a projector
Step7: Creating the average reference as a projector has a few advantages
|
<ASSISTANT_TASK:>
Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
raw.pick(['EEG 0{:02}'.format(n) for n in range(41, 60)])
# code lines below are commented out because the sample data doesn't have
# earlobe or mastoid channels, so this is just for demonstration purposes:
# use a single channel reference (left earlobe)
# raw.set_eeg_reference(ref_channels=['A1'])
# use average of mastoid channels as reference
# raw.set_eeg_reference(ref_channels=['M1', 'M2'])
raw.plot()
# add new reference channel (all zero)
raw_new_ref = mne.add_reference_channels(raw, ref_channels=['EEG 999'])
raw_new_ref.plot()
# set reference to `EEG 050`
raw_new_ref.set_eeg_reference(ref_channels=['EEG 050'])
raw_new_ref.plot()
# use the average of all channels as reference
raw_avg_ref = raw.copy().set_eeg_reference(ref_channels='average')
raw_avg_ref.plot()
raw.set_eeg_reference('average', projection=True)
print(raw.info['projs'])
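# Optional (a sketch, not part of the original tutorial): the average-reference
# projector stays unapplied until needed; apply_proj() would apply it to the data.
raw_avg_proj = raw.copy().apply_proj()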
for title, proj in zip(['Original', 'Average'], [False, True]):
fig = raw.plot(proj=proj, n_channels=len(raw))
# make room for title
fig.subplots_adjust(top=0.9)
fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we wil setup logging
Step2: Now lets load a set of documents
Step3: This corpus contains 10 documents. Now lets say we would like to model this with DTM.
Step4: A simple corpus wrapper to load a premade corpus. You can use this with your own data.
Step5: So now we have to generate the path to DTM executable, here I have already set an ENV variable for the DTM_HOME
Step6: That is basically all we need to be able to invoke the Training.
Step7: If everything worked we should be able to print out the topics
Step8: Next, we'll attempt to find the Document-Topic proportions. We will use the gamma class variable of the model to do the same. Gamma is a matrix such that gamma[5,10] is the proportion of the 10th topic in document 5.
|
<ASSISTANT_TASK:>
Python Code:
import logging
import os
from gensim import corpora, utils
from gensim.models.wrappers.dtmmodel import DtmModel
import numpy as np
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
documents = [[u'senior', u'studios', u'studios', u'studios', u'creators', u'award', u'mobile', u'currently', u'challenges', u'senior', u'summary', u'senior', u'motivated', u'creative', u'senior', u'performs', u'engineering', u'tasks', u'infrastructure', u'focusing', u'primarily', u'programming', u'interaction', u'designers', u'engineers', u'leadership', u'teams', u'teams', u'crews', u'responsibilities', u'engineering', u'quality', u'functional', u'functional', u'teams', u'organizing', u'prioritizing', u'technical', u'decisions', u'engineering', u'participates', u'participates', u'reviews', u'participates', u'hiring', u'conducting', u'interviews', u'feedback', u'departments', u'define', u'focusing', u'engineering', u'teams', u'crews', u'facilitate', u'engineering', u'departments', u'deadlines', u'milestones', u'typically', u'spends', u'designing', u'developing', u'updating', u'bugs', u'mentoring', u'engineers', u'define', u'schedules', u'milestones', u'participating', u'reviews', u'interviews', u'sized', u'teams', u'interacts', u'disciplines', u'knowledge', u'skills', u'knowledge', u'knowledge', u'xcode', u'scripting', u'debugging', u'skills', u'skills', u'knowledge', u'disciplines', u'animation', u'networking', u'expertise', u'competencies', u'oral', u'skills', u'management', u'skills', u'proven', u'effectively', u'teams', u'deadline', u'environment', u'bachelor', u'minimum', u'shipped', u'leadership', u'teams', u'location', u'resumes', u'jobs', u'candidates', u'openings', u'jobs'], [u'maryland', u'client', u'producers', u'electricity', u'operates', u'storage', u'utility', u'retail', u'customers', u'engineering', u'consultant', u'maryland', u'summary', u'technical', u'technology', u'departments', u'expertise', u'maximizing', u'output', u'reduces', u'operating', u'participates', u'areas', u'engineering', u'conducts', u'testing', u'solve', u'supports', u'environmental', u'understands', u'objectives', u'operates', u'responsibilities', u'handles', u'complex', u'engineering', u'aspects', u'monitors', u'quality', u'proficiency', u'optimization', u'recommendations', u'supports', u'personnel', u'troubleshooting', u'commissioning', u'startup', u'shutdown', u'supports', u'procedure', u'operating', u'units', u'develops', u'simulations', u'troubleshooting', u'tests', u'enhancing', u'solving', u'develops', u'estimates', u'schedules', u'scopes', u'understands', u'technical', u'management', u'utilize', u'routine', u'conducts', u'hazards', u'utilizing', u'hazard', u'operability', u'methodologies', u'participates', u'startup', u'reviews', u'pssr', u'participate', u'teams', u'participate', u'regulatory', u'audits', u'define', u'scopes', u'budgets', u'schedules', u'technical', u'management', u'environmental', u'awareness', u'interfacing', u'personnel', u'interacts', u'regulatory', u'departments', u'input', u'objectives', u'identifying', u'introducing', u'concepts', u'solutions', u'peers', u'customers', u'coworkers', u'knowledge', u'skills', u'engineering', u'quality', u'engineering', u'commissioning', u'startup', u'knowledge', u'simulators', u'technologies', u'knowledge', u'engineering', u'techniques', u'disciplines', u'leadership', u'skills', u'proven', u'engineers', u'oral', u'skills', u'technical', u'skills', u'analytically', u'solve', u'complex', u'interpret', u'proficiency', u'simulation', u'knowledge', u'applications', u'manipulate', u'applications', u'engineering', u'calculations', u'programs', u'matlab', u'excel', u'independently', u'environment', u'proven', u'skills', u'effectively', u'multiple', 
u'tasks', u'planning', u'organizational', u'management', u'skills', u'rigzone', u'jobs', u'developer', u'exceptional', u'strategies', u'junction', u'exceptional', u'strategies', u'solutions', u'solutions', u'biggest', u'insurers', u'operates', u'investment'], [u'vegas', u'tasks', u'electrical', u'contracting', u'expertise', u'virtually', u'electrical', u'developments', u'institutional', u'utilities', u'technical', u'experts', u'relationships', u'credibility', u'contractors', u'utility', u'customers', u'customer', u'relationships', u'consistently', u'innovations', u'profile', u'construct', u'envision', u'dynamic', u'complex', u'electrical', u'management', u'grad', u'internship', u'electrical', u'engineering', u'infrastructures', u'engineers', u'documented', u'management', u'engineering', u'quality', u'engineering', u'electrical', u'engineers', u'complex', u'distribution', u'grounding', u'estimation', u'testing', u'procedures', u'voltage', u'engineering', u'troubleshooting', u'installation', u'documentation', u'bsee', u'certification', u'electrical', u'voltage', u'cabling', u'electrical', u'engineering', u'candidates', u'electrical', u'internships', u'oral', u'skills', u'organizational', u'prioritization', u'skills', u'skills', u'excel', u'cadd', u'calculation', u'autocad', u'mathcad', u'skills', u'skills', u'customer', u'relationships', u'solving', u'ethic', u'motivation', u'tasks', u'budget', u'affirmative', u'diversity', u'workforce', u'gender', u'orientation', u'disability', u'disabled', u'veteran', u'vietnam', u'veteran', u'qualifying', u'veteran', u'diverse', u'candidates', u'respond', u'developing', u'workplace', u'reflects', u'diversity', u'communities', u'reviews', u'electrical', u'contracting', u'southwest', u'electrical', u'contractors'], [u'intern', u'electrical', u'engineering', u'idexx', u'laboratories', u'validating', u'idexx', u'integrated', u'hardware', u'entails', u'planning', u'debug', u'validation', u'engineers', u'validation', u'methodologies', u'healthcare', u'platforms', u'brightest', u'solve', u'challenges', u'innovation', u'technology', u'idexx', u'intern', u'idexx', u'interns', u'supplement', u'interns', u'teams', u'roles', u'competitive', u'interns', u'idexx', u'interns', u'participate', u'internships', u'mentors', u'seminars', u'topics', u'leadership', u'workshops', u'relevant', u'planning', u'topics', u'intern', u'presentations', u'mixers', u'applicants', u'ineligible', u'laboratory', u'compliant', u'idexx', u'laboratories', u'healthcare', u'innovation', u'practicing', u'veterinarians', u'diagnostic', u'technology', u'idexx', u'enhance', u'veterinarians', u'efficiency', u'economically', u'idexx', u'worldwide', u'diagnostic', u'tests', u'tests', u'quality', u'headquartered', u'idexx', u'laboratories', u'employs', u'customers', u'qualifications', u'applicants', u'idexx', u'interns', u'potential', u'demonstrated', u'portfolio', u'recommendation', u'resumes', u'marketing', u'location', u'americas', u'verification', u'validation', u'schedule', u'overtime', u'idexx', u'laboratories', u'reviews', u'idexx', u'laboratories', u'nasdaq', u'healthcare', u'innovation', u'practicing', u'veterinarians'], [u'location', u'duration', u'temp', u'verification', u'validation', u'tester', u'verification', u'validation', u'middleware', u'specifically', u'testing', u'applications', u'clinical', u'laboratory', u'regulated', u'environment', u'responsibilities', u'complex', u'hardware', u'testing', u'clinical', u'analyzers', u'laboratory', u'graphical', u'interfaces', u'complex', 
u'sample', u'sequencing', u'protocols', u'developers', u'correction', u'tracking', u'tool', u'timely', u'troubleshoot', u'testing', u'functional', u'manual', u'automated', u'participate', u'ongoing', u'testing', u'coverage', u'planning', u'documentation', u'testing', u'validation', u'corrections', u'monitor', u'implementation', u'recurrence', u'operating', u'statistical', u'quality', u'testing', u'global', u'multi', u'teams', u'travel', u'skills', u'concepts', u'waterfall', u'agile', u'methodologies', u'debugging', u'skills', u'complex', u'automated', u'instrumentation', u'environment', u'hardware', u'mechanical', u'components', u'tracking', u'lifecycle', u'management', u'quality', u'organize', u'define', u'priorities', u'organize', u'supervision', u'aggressive', u'deadlines', u'ambiguity', u'analyze', u'complex', u'situations', u'concepts', u'technologies', u'verbal', u'skills', u'effectively', u'technical', u'clinical', u'diverse', u'strategy', u'clinical', u'chemistry', u'analyzer', u'laboratory', u'middleware', u'basic', u'automated', u'testing', u'biomedical', u'engineering', u'technologists', u'laboratory', u'technology', u'availability', u'click', u'attach'], [u'scientist', u'linux', u'asrc', u'scientist', u'linux', u'asrc', u'technology', u'solutions', u'subsidiary', u'asrc', u'engineering', u'technology', u'contracts', u'multiple', u'agencies', u'scientists', u'engineers', u'management', u'personnel', u'allows', u'solutions', u'complex', u'aeronautics', u'aviation', u'management', u'aviation', u'engineering', u'hughes', u'technical', u'technical', u'aviation', u'evaluation', u'engineering', u'management', u'technical', u'terminal', u'surveillance', u'programs', u'currently', u'scientist', u'travel', u'responsibilities', u'develops', u'technology', u'modifies', u'technical', u'complex', u'reviews', u'draft', u'conformity', u'completeness', u'testing', u'interface', u'hardware', u'regression', u'impact', u'reliability', u'maintainability', u'factors', u'standardization', u'skills', u'travel', u'programming', u'linux', u'environment', u'cisco', u'knowledge', u'terminal', u'environment', u'clearance', u'clearance', u'input', u'output', u'digital', u'automatic', u'terminal', u'management', u'controller', u'termination', u'testing', u'evaluating', u'policies', u'procedure', u'interface', u'installation', u'verification', u'certification', u'core', u'avionic', u'programs', u'knowledge', u'procedural', u'testing', u'interfacing', u'hardware', u'regression', u'impact', u'reliability', u'maintainability', u'factors', u'standardization', u'missions', u'asrc', u'subsidiaries', u'affirmative', u'employers', u'applicants', u'disability', u'veteran', u'technology', u'location', u'airport', u'bachelor', u'schedule', u'travel', u'contributor', u'management', u'asrc', u'reviews'], [u'technical', u'solarcity', u'niche', u'vegas', u'overview', u'resolving', u'customer', u'clients', u'expanding', u'engineers', u'developers', u'responsibilities', u'knowledge', u'planning', u'adapt', u'dynamic', u'environment', u'inventive', u'creative', u'solarcity', u'lifecycle', u'responsibilities', u'technical', u'analyzing', u'diagnosing', u'troubleshooting', u'customers', u'ticketing', u'console', u'escalate', u'knowledge', u'engineering', u'timely', u'basic', u'phone', u'functionality', u'customer', u'tracking', u'knowledgebase', u'rotation', u'configure', u'deployment', u'sccm', u'technical', u'deployment', u'deploy', u'hardware', u'solarcity', u'bachelor', u'knowledge', u'dell', u'laptops', u'analytical', 
u'troubleshooting', u'solving', u'skills', u'knowledge', u'databases', u'preferably', u'server', u'preferably', u'monitoring', u'suites', u'documentation', u'procedures', u'knowledge', u'entries', u'verbal', u'skills', u'customer', u'skills', u'competitive', u'solar', u'package', u'insurance', u'vacation', u'savings', u'referral', u'eligibility', u'equity', u'performers', u'solarcity', u'affirmative', u'diversity', u'workplace', u'applicants', u'orientation', u'disability', u'veteran', u'careerrookie'], [u'embedded', u'exelis', u'junction', u'exelis', u'embedded', u'acquisition', u'networking', u'capabilities', u'classified', u'customer', u'motivated', u'develops', u'tests', u'innovative', u'solutions', u'minimal', u'supervision', u'paced', u'environment', u'enjoys', u'assignments', u'interact', u'multi', u'disciplined', u'challenging', u'focused', u'embedded', u'developments', u'spanning', u'engineering', u'lifecycle', u'specification', u'enhancement', u'applications', u'embedded', u'freescale', u'applications', u'android', u'platforms', u'interface', u'customers', u'developers', u'refine', u'specifications', u'architectures', u'java', u'programming', u'scripts', u'python', u'debug', u'debugging', u'emulators', u'regression', u'revisions', u'specialized', u'setups', u'capabilities', u'subversion', u'technical', u'documentation', u'multiple', u'engineering', u'techexpousa', u'reviews'], [u'modeler', u'semantic', u'modeling', u'models', u'skills', u'ontology', u'resource', u'framework', u'schema', u'technologies', u'hadoop', u'warehouse', u'oracle', u'relational', u'artifacts', u'models', u'dictionaries', u'models', u'interface', u'specifications', u'documentation', u'harmonization', u'mappings', u'aligned', u'coordinate', u'technical', u'peer', u'reviews', u'stakeholder', u'communities', u'impact', u'domains', u'relationships', u'interdependencies', u'models', u'define', u'analyze', u'legacy', u'models', u'corporate', u'databases', u'architectural', u'alignment', u'customer', u'expertise', u'harmonization', u'modeling', u'modeling', u'consulting', u'stakeholders', u'quality', u'models', u'storage', u'agile', u'specifically', u'focus', u'modeling', u'qualifications', u'bachelors', u'accredited', u'modeler', u'encompass', u'evaluation', u'skills', u'knowledge', u'modeling', u'techniques', u'resource', u'framework', u'schema', u'technologies', u'unified', u'modeling', u'technologies', u'schemas', u'ontologies', u'sybase', u'knowledge', u'skills', u'interpersonal', u'skills', u'customers', u'clearance', u'applicants', u'eligibility', u'classified', u'clearance', u'polygraph', u'techexpousa', u'solutions', u'partnership', u'solutions', u'integration'], [u'technologies', u'junction', u'develops', u'maintains', u'enhances', u'complex', u'diverse', u'intensive', u'analytics', u'algorithm', u'manipulation', u'management', u'documented', u'individually', u'reviews', u'tests', u'components', u'adherence', u'resolves', u'utilizes', u'methodologies', u'environment', u'input', u'components', u'hardware', u'offs', u'reuse', u'cots', u'gots', u'synthesis', u'components', u'tasks', u'individually', u'analyzes', u'modifies', u'debugs', u'corrects', u'integrates', u'operating', u'environments', u'develops', u'queries', u'databases', u'repositories', u'recommendations', u'improving', u'documentation', u'develops', u'implements', u'algorithms', u'functional', u'assists', u'developing', u'executing', u'procedures', u'components', u'reviews', u'documentation', u'solutions', u'analyzing', u'conferring', u'users', 
u'engineers', u'analyzing', u'investigating', u'areas', u'adapt', u'hardware', u'mathematical', u'models', u'predict', u'outcome', u'implement', u'complex', u'database', u'repository', u'interfaces', u'queries', u'bachelors', u'accredited', u'substituted', u'bachelors', u'firewalls', u'ipsec', u'vpns', u'technology', u'administering', u'servers', u'apache', u'jboss', u'tomcat', u'developing', u'interfaces', u'firefox', u'internet', u'explorer', u'operating', u'mainframe', u'linux', u'solaris', u'virtual', u'scripting', u'programming', u'oriented', u'programming', u'ajax', u'script', u'procedures', u'cobol', u'cognos', u'fusion', u'focus', u'html', u'java', u'java', u'script', u'jquery', u'perl', u'visual', u'basic', u'powershell', u'cots', u'cots', u'oracle', u'apex', u'integration', u'competitive', u'package', u'bonus', u'corporate', u'equity', u'tuition', u'reimbursement', u'referral', u'bonus', u'holidays', u'insurance', u'flexible', u'disability', u'insurance', u'technologies', u'disability', u'accommodation', u'recruiter', u'techexpousa']]
time_seq = [3, 7] # first 3 documents are from time slice one
# and the other 7 are from the second time slice.
class DTMcorpus(corpora.textcorpus.TextCorpus):
def get_texts(self):
return self.input
def __len__(self):
return len(self.input)
corpus = DTMcorpus(documents)
# path to dtm home folder
dtm_home = os.environ.get('DTM_HOME', "dtm-master")
# path to the binary. on my PC the executable file is dtm-master/bin/dtm
dtm_path = os.path.join(dtm_home, 'bin', 'dtm') if dtm_home else None
# you can also copy the path down directly
dtm_path = "/home/bhargav/dtm/main"
model = DtmModel(dtm_path, corpus, time_seq, num_topics=2,
id2word=corpus.dictionary, initialize_lda=True)
topics = model.show_topic(topicid=1, time=1, topn=10)
topics
doc_number = 1
num_topics = 2
for i in range(0,num_topics):
print "Distribution of Topic %d %f" % (i,model.gamma_[doc_number,i])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 5
Step5: Step 6
Step6: Step 7
Step7: Step 8
Step8: Step 9
|
<ASSISTANT_TASK:>
Python Code:
from nltk.tokenize.punkt import PunktSentenceTokenizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
import networkx as nx
import re
import urllib2
from bs4 import BeautifulSoup
import pandas as pd
# -*- coding: utf-8 -*-
urls = []
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1994795/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=1994795')
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC314300/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=314300')
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4383356/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4383356')
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4596899/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4596899')
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4303126/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4303126')
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4637461/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4637461')
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4690355/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4690355')
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3505152/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=3505152')
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3976810/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=3976810')
#https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4061037/
urls.append('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4061037')
documents = []
abstracts = []
texts = []
print 'Preprocessing documents. This may take few minutes ...'
for i, url in enumerate(urls):
print 'Preprocessing document %d ...' % (i+1)
# Download the document
my_url = urllib2.urlopen(url)
raw_doc = BeautifulSoup(my_url.read(), 'xml')
documents.append(raw_doc)
# Extract the cleaned abstract
raw_abstract = raw_doc.abstract
my_abstract = re.sub(r'<\/?\w+>', r' ', str(raw_abstract)) # remove xml tags
abstracts.append(my_abstract)
# Extract the cleaned text
text = raw_doc.body
text = re.sub(r'\\n', r' ', str(text)) # remove newline characters
text = re.sub(r'<[^>]+>', r' ', str(text)) # remove xml tags
text = re.sub(r'\[[^\[^\]]+\]', r' ', str(text)) # remove references
text = re.sub(r'\[', r' ', str(text)) # remove any remaining [
text = re.sub(r'\]', r' ', str(text)) # remove any remaining ]
text = re.sub(r'[\s]{2,}', r' ', str(text)) # remove more than a single blank space
text = re.sub(r'\.\s+,\s+\S', r' ', str(text)) # remove , after a period
text = text.decode('utf-8')
texts.append(text)
print 'All documents preprocessed successfully.'
print 'We have %d documents with %d abstracts and %d texts.' % (len(documents), len(abstracts), len(texts))
assert len(documents) == len(abstracts)
assert len(documents) == len(texts)
punkttokenizer = PunktSentenceTokenizer()
text_sentences = []
for text in texts:
sentences = []
seen = set()
for sentence in punkttokenizer.tokenize(text):
sentences.append(sentence)
text_sentences.append(sentences)
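# Quick illustrative check on an assumed example sentence (not taken from the articles):
# the Punkt tokenizer splits on sentence boundaries while handling common abbreviations.
print punkttokenizer.tokenize("Dr. Smith measured the samples. The results, i.e. the means, were stable.")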
tf_matrices = []
tfidf_matrices = []
cosine_similarity_matrices = []
print 'Calculating sentence simiarities. This may take few minutes ...'
for i, sentences in enumerate(text_sentences):
print 'Calculating sentence simiarities of document %d ...' % (i+1)
tf_matrix = CountVectorizer().fit_transform(sentences)
tf_matrices.append(tf_matrix)
tfidf_matrix = TfidfTransformer().fit_transform(tf_matrix)
tfidf_matrices.append(tfidf_matrix)
cosine_similarity_matrix = tfidf_matrix * tfidf_matrix.T
cosine_similarity_matrices.append(cosine_similarity_matrix)
print 'All documents processed successfully.'
print 'We have %d documents with %d tf_matrices %d tfidf_matrices and %d cosine_similarity_matrices.' \
% (len(documents), len(tf_matrices), len(tfidf_matrices), len(cosine_similarity_matrices))
assert len(documents) == len(tf_matrices)
assert len(documents) == len(tfidf_matrices)
assert len(documents) == len(cosine_similarity_matrices)
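# Sanity check (illustrative, not required by the pipeline): TfidfTransformer L2-normalizes each
# row by default, so the dot product tfidf_matrix * tfidf_matrix.T computed above is exactly the
# pairwise cosine similarity. Verify this on the first document with scikit-learn's helper.
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
assert np.allclose(cosine_similarity(tfidf_matrices[0]), cosine_similarity_matrices[0].toarray())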
similarity_graphs = []
graph_ranks = []
highest_ranks = []
lowest_ranks = []
print 'Calculating TextRanks. This may take few minutes ...'
for i, cosine_similarity_matrix in enumerate(cosine_similarity_matrices):
print 'Calculating TextRanks of document %d ...' % (i+1)
similarity_graph = nx.from_scipy_sparse_matrix(cosine_similarity_matrix)
similarity_graphs.append(similarity_graph)
ranks = nx.pagerank(similarity_graph)
graph_ranks.append(ranks)
highest = sorted(((ranks[j],s) for j,s in enumerate(text_sentences[i])), reverse=True)
highest_ranks.append(highest)
lowest = sorted(((ranks[j],s) for j,s in enumerate(text_sentences[i])), reverse=False)
lowest_ranks.append(lowest)
print 'All documents processed successfully.'
print 'We have %d documents with %d similarity_graphs %d graph_ranks and %d highest_ranks.' \
% (len(documents), len(similarity_graphs), len(graph_ranks), len(highest_ranks))
assert len(documents) == len(similarity_graphs)
assert len(documents) == len(graph_ranks)
assert len(documents) == len(highest_ranks)
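# Illustrative peek (optional): the highest-ranked sentence of the first article, i.e. the
# sentence TextRank considers most central to that document.
print highest_ranks[0][0][1]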
print 'Saving extractive summaries. This may take a few minutes ...'
for i, highest in enumerate(highest_ranks):
print 'Writing extractive summary for document %d ...' % (i+1)
out_file = '\\TextRank\\system\\article%d_system1.txt' % (i+1)
with open(out_file, 'w') as f:
for i in range(5):
f.write((highest[i][1] + '\n').encode('utf-8'))
print 'All documents processed successfully.'
print 'Saving ground truths. This may take a few minutes ...'
for i, abstract in enumerate(abstracts):
print 'Writing ground truth for document %d ...' % (i+1)
out_file = '\\TextRank\\reference\\article%d_reference1.txt' % (i+1)
with open(out_file, 'w') as f:
f.write(abstract.strip() + '\n')
print 'All documents processed successfully.'
%cd C:\ROUGE
!java -jar rouge2.0_0.2.jar
df = pd.read_csv('results.csv')
print df.sort_values('Avg_F-Score', ascending=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the kernel
Step2: Set up your Google Cloud project
Step3: Otherwise, set your project ID here.
Step4: Authenticate your Google Cloud account
Step5: Import libraries and define constants
Step6: Create Feature Store Resources
Step7: Create Entity Types
Step8: Create Features
Step9: Ingest Feature Values into Entity Type from a Pandas DataFrame
Step10: Load Avro Files into Pandas DataFrames
Step11: Ingest Feature Values into Users Entity Type
Step12: Ingest Feature Values into Movies Entity Type
Step13: Read/Online Serve Entity's Feature Values from Vertex AI Online Feature Store
Step14: Batch Serve Featurestore's Feature Values from Vertex AI Feature Store
Step15: Load Csv File into a Pandas DataFrame
Step16: Change the Dtype of Timestamp to Datetime64
Step17: Batch Serve Feature Values from Movie Predictions Featurestore
Step18: Read the Updated Feature Values
Step19: Ingest Updated Feature Values
Step20: Read from the Entity Type Shows Updated Feature Values from the Latest Ingestion
Step21: Point-in-Time Correctness
Step22: Backfill/Correct Point-in-Time Data
Step23: Ingest Backfill/Correct Point-in-Time Data
Step24: Batch Serve From the Latest Ingestion with Backfill/Correction Has Reduced Missing Data
Step25: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip uninstall {USER_FLAG} -y google-cloud-aiplatform
! pip uninstall {USER_FLAG} -y google-cloud-bigquery
! pip uninstall {USER_FLAG} -y google-cloud-bigquery-storage
! pip uninstall {USER_FLAG} -y google-cloud-aiplatform
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
! pip install {USER_FLAG} --upgrade google-cloud-bigquery
! pip install {USER_FLAG} --upgrade google-cloud-bigquery-storage
! pip install {USER_FLAG} avro
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "" # @param {type:"string"}
print("Project ID: ", PROJECT_ID)
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
import datetime
import pandas as pd
from google.cloud import aiplatform
REGION = "" # @param {type:"string"}
aiplatform.init(project=PROJECT_ID, location=REGION)
movie_predictions_feature_store = aiplatform.Featurestore.create(
featurestore_id="movie_predictions",
online_store_fixed_node_count=1,
)
users_entity_type = movie_predictions_feature_store.create_entity_type(
entity_type_id="users",
description="Users entity",
)
movies_entity_type = movie_predictions_feature_store.create_entity_type(
entity_type_id="movies",
description="Movies entity",
)
users_feature_age = users_entity_type.create_feature(
feature_id="age",
value_type="INT64",
description="User age",
)
users_feature_gender = users_entity_type.create_feature(
feature_id="gender",
value_type="STRING",
description="User gender",
)
users_feature_liked_genres = users_entity_type.create_feature(
feature_id="liked_genres",
value_type="STRING_ARRAY",
description="An array of genres this user liked",
)
movies_feature_configs = {
"title": {
"value_type": "STRING",
"description": "The title of the movie",
},
"genres": {
"value_type": "STRING",
"description": "The genre of the movie",
},
"average_rating": {
"value_type": "DOUBLE",
"description": "The average rating for the movie, range is [1.0-5.0]",
},
}
movie_features = movies_entity_type.batch_create_features(
feature_configs=movies_feature_configs,
)
GCS_USERS_AVRO_URI = (
"gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/users.avro"
)
GCS_MOVIES_AVRO_URI = (
"gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movies.avro"
)
USERS_AVRO_FN = "users.avro"
MOVIES_AVRO_FN = "movies.avro"
! gsutil cp $GCS_USERS_AVRO_URI $USERS_AVRO_FN
! gsutil cp $GCS_MOVIES_AVRO_URI $MOVIES_AVRO_FN
from avro.datafile import DataFileReader
from avro.io import DatumReader
class AvroReader:
def __init__(self, data_file):
self.avro_reader = DataFileReader(open(data_file, "rb"), DatumReader())
def to_dataframe(self):
records = [record for record in self.avro_reader]
return pd.DataFrame.from_records(data=records)
users_avro_reader = AvroReader(data_file=USERS_AVRO_FN)
users_source_df = users_avro_reader.to_dataframe()
print(users_source_df)
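# Optional sanity check before ingestion (illustrative only): confirm the column dtypes and that
# the update_time column used below as the feature timestamp has no missing values.
print(users_source_df.dtypes)
print(users_source_df["update_time"].isna().sum())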
movies_avro_reader = AvroReader(data_file=MOVIES_AVRO_FN)
movies_source_df = movies_avro_reader.to_dataframe()
print(movies_source_df)
users_entity_type.ingest_from_df(
feature_ids=["age", "gender", "liked_genres"],
feature_time="update_time",
df_source=users_source_df,
entity_id_field="user_id",
)
movies_entity_type.ingest_from_df(
feature_ids=["average_rating", "title", "genres"],
feature_time="update_time",
df_source=movies_source_df,
entity_id_field="movie_id",
)
users_read_df = users_entity_type.read(
entity_ids=["dave", "alice", "charlie", "bob", "eve"],
)
print(users_read_df)
movies_read_df = movies_entity_type.read(
entity_ids=["movie_01", "movie_02", "movie_03", "movie_04"],
feature_ids=["title", "genres", "average_rating"],
)
print(movies_read_df)
GCS_READ_INSTANCES_CSV_URI = "gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movie_prediction.csv"
READ_INSTANCES_CSV_FN = "movie_prediction.csv"  # local destination for the read-instances file
! gsutil cp $GCS_READ_INSTANCES_CSV_URI $READ_INSTANCES_CSV_FN
read_instances_df = pd.read_csv(READ_INSTANCES_CSV_FN)
print(read_instances_df)
print("before: ", read_instances_df["timestamp"].dtype)
read_instances_df = read_instances_df.astype({"timestamp": "datetime64"})
print("after: ", read_instances_df["timestamp"].dtype)
movie_predictions_df = movie_predictions_feature_store.batch_serve_to_df(
serving_feature_ids={
"users": ["age", "gender", "liked_genres"],
"movies": ["title", "average_rating", "genres"],
},
read_instances_df=read_instances_df,
)
movie_predictions_df
print(movies_read_df)
update_movies_df = pd.DataFrame(
data=[["movie_03", 4.3], ["movie_04", 4.8]],
columns=["movie_id", "average_rating"],
)
print(update_movies_df)
movies_entity_type.ingest_from_df(
feature_ids=["average_rating"],
feature_time=datetime.datetime.now(),
df_source=update_movies_df,
entity_id_field="movie_id",
)
update_movies_read_df = movies_entity_type.read(
entity_ids=["movie_01", "movie_02", "movie_03", "movie_04"],
feature_ids=["title", "genres", "average_rating"],
)
print(update_movies_read_df)
print(movie_predictions_df)
backfill_users_df = pd.DataFrame(
data=[["bob", 34, "Male", ["Drama"], "2020-02-13 09:35:15"]],
columns=["user_id", "age", "gender", "liked_genres", "update_time"],
)
backfill_users_df = backfill_users_df.astype({"update_time": "datetime64"})
print(backfill_users_df)
backfill_movies_df = pd.DataFrame(
data=[["movie_04", 4.2, "The Dark Knight", "Action", "2020-02-13 09:35:15"]],
columns=["movie_id", "average_rating", "title", "genres", "update_time"],
)
backfill_movies_df = backfill_movies_df.astype({"update_time": "datetime64"})
print(backfill_movies_df)
users_entity_type.ingest_from_df(
feature_ids=["age", "gender", "liked_genres"],
feature_time="update_time",
df_source=backfill_users_df,
entity_id_field="user_id",
)
movies_entity_type.ingest_from_df(
feature_ids=["average_rating", "title", "genres"],
feature_time="update_time",
df_source=backfill_movies_df,
entity_id_field="movie_id",
)
backfill_movie_predictions_df = movie_predictions_feature_store.batch_serve_to_df(
serving_feature_ids={
"users": ["age", "gender", "liked_genres"],
"movies": ["title", "average_rating", "genres"],
},
read_instances_df=read_instances_df,
)
print(backfill_movie_predictions_df)
movie_predictions_feature_store.delete(force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the model parameters and metadata¶
Step2: Trying it out
Step3: On some test images from the web
Step4: Process test images and print top 5 predicted labels¶
|
<ASSISTANT_TASK:>
Python Code:
import lasagne
from lasagne.layers import InputLayer
from lasagne.layers import Conv2DLayer, Pool2DLayer
from lasagne.layers import DenseLayer
from lasagne.layers import GlobalPoolLayer
from lasagne.layers import ConcatLayer
from lasagne.layers.normalization import batch_norm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def bn_conv(input_layer, **kwargs):
l = Conv2DLayer(input_layer, **kwargs)
l = batch_norm(l, epsilon=0.001)
return l
def inceptionA(input_layer, nfilt):
# Corresponds to a modified version of figure 5 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=1)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2 = bn_conv(l2, num_filters=nfilt[1][1], filter_size=5, pad=2)
l3 = bn_conv(input_layer, num_filters=nfilt[2][0], filter_size=1)
l3 = bn_conv(l3, num_filters=nfilt[2][1], filter_size=3, pad=1)
l3 = bn_conv(l3, num_filters=nfilt[2][2], filter_size=3, pad=1)
l4 = Pool2DLayer(input_layer, pool_size=3, stride=1, pad=1, mode='average_exc_pad')
l4 = bn_conv(l4, num_filters=nfilt[3][0], filter_size=1)
return ConcatLayer([l1, l2, l3, l4])
def inceptionB(input_layer, nfilt):
# Corresponds to a modified version of figure 10 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=3, stride=2)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2 = bn_conv(l2, num_filters=nfilt[1][1], filter_size=3, pad=1)
l2 = bn_conv(l2, num_filters=nfilt[1][2], filter_size=3, stride=2)
l3 = Pool2DLayer(input_layer, pool_size=3, stride=2)
return ConcatLayer([l1, l2, l3])
def inceptionC(input_layer, nfilt):
# Corresponds to figure 6 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=1)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2 = bn_conv(l2, num_filters=nfilt[1][1], filter_size=(1, 7), pad=(0, 3))
l2 = bn_conv(l2, num_filters=nfilt[1][2], filter_size=(7, 1), pad=(3, 0))
l3 = bn_conv(input_layer, num_filters=nfilt[2][0], filter_size=1)
l3 = bn_conv(l3, num_filters=nfilt[2][1], filter_size=(7, 1), pad=(3, 0))
l3 = bn_conv(l3, num_filters=nfilt[2][2], filter_size=(1, 7), pad=(0, 3))
l3 = bn_conv(l3, num_filters=nfilt[2][3], filter_size=(7, 1), pad=(3, 0))
l3 = bn_conv(l3, num_filters=nfilt[2][4], filter_size=(1, 7), pad=(0, 3))
l4 = Pool2DLayer(input_layer, pool_size=3, stride=1, pad=1, mode='average_exc_pad')
l4 = bn_conv(l4, num_filters=nfilt[3][0], filter_size=1)
return ConcatLayer([l1, l2, l3, l4])
def inceptionD(input_layer, nfilt):
# Corresponds to a modified version of figure 10 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=1)
l1 = bn_conv(l1, num_filters=nfilt[0][1], filter_size=3, stride=2)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2 = bn_conv(l2, num_filters=nfilt[1][1], filter_size=(1, 7), pad=(0, 3))
l2 = bn_conv(l2, num_filters=nfilt[1][2], filter_size=(7, 1), pad=(3, 0))
l2 = bn_conv(l2, num_filters=nfilt[1][3], filter_size=3, stride=2)
l3 = Pool2DLayer(input_layer, pool_size=3, stride=2)
return ConcatLayer([l1, l2, l3])
def inceptionE(input_layer, nfilt, pool_mode):
# Corresponds to figure 7 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=1)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2a = bn_conv(l2, num_filters=nfilt[1][1], filter_size=(1, 3), pad=(0, 1))
l2b = bn_conv(l2, num_filters=nfilt[1][2], filter_size=(3, 1), pad=(1, 0))
l3 = bn_conv(input_layer, num_filters=nfilt[2][0], filter_size=1)
l3 = bn_conv(l3, num_filters=nfilt[2][1], filter_size=3, pad=1)
l3a = bn_conv(l3, num_filters=nfilt[2][2], filter_size=(1, 3), pad=(0, 1))
l3b = bn_conv(l3, num_filters=nfilt[2][3], filter_size=(3, 1), pad=(1, 0))
l4 = Pool2DLayer(input_layer, pool_size=3, stride=1, pad=1, mode=pool_mode)
l4 = bn_conv(l4, num_filters=nfilt[3][0], filter_size=1)
return ConcatLayer([l1, l2a, l2b, l3a, l3b, l4])
def build_network():
net = {}
net['input'] = InputLayer((None, 3, 299, 299))
net['conv'] = bn_conv(net['input'], num_filters=32, filter_size=3, stride=2)
net['conv_1'] = bn_conv(net['conv'], num_filters=32, filter_size=3)
net['conv_2'] = bn_conv(net['conv_1'], num_filters=64, filter_size=3, pad=1)
net['pool'] = Pool2DLayer(net['conv_2'], pool_size=3, stride=2, mode='max')
net['conv_3'] = bn_conv(net['pool'], num_filters=80, filter_size=1)
net['conv_4'] = bn_conv(net['conv_3'], num_filters=192, filter_size=3)
net['pool_1'] = Pool2DLayer(net['conv_4'], pool_size=3, stride=2, mode='max')
net['mixed/join'] = inceptionA(
net['pool_1'], nfilt=((64,), (48, 64), (64, 96, 96), (32,)))
net['mixed_1/join'] = inceptionA(
net['mixed/join'], nfilt=((64,), (48, 64), (64, 96, 96), (64,)))
net['mixed_2/join'] = inceptionA(
net['mixed_1/join'], nfilt=((64,), (48, 64), (64, 96, 96), (64,)))
net['mixed_3/join'] = inceptionB(
net['mixed_2/join'], nfilt=((384,), (64, 96, 96)))
net['mixed_4/join'] = inceptionC(
net['mixed_3/join'],
nfilt=((192,), (128, 128, 192), (128, 128, 128, 128, 192), (192,)))
net['mixed_5/join'] = inceptionC(
net['mixed_4/join'],
nfilt=((192,), (160, 160, 192), (160, 160, 160, 160, 192), (192,)))
net['mixed_6/join'] = inceptionC(
net['mixed_5/join'],
nfilt=((192,), (160, 160, 192), (160, 160, 160, 160, 192), (192,)))
net['mixed_7/join'] = inceptionC(
net['mixed_6/join'],
nfilt=((192,), (192, 192, 192), (192, 192, 192, 192, 192), (192,)))
net['mixed_8/join'] = inceptionD(
net['mixed_7/join'],
nfilt=((192, 320), (192, 192, 192, 192)))
net['mixed_9/join'] = inceptionE(
net['mixed_8/join'],
nfilt=((320,), (384, 384, 384), (448, 384, 384, 384), (192,)),
pool_mode='average_exc_pad')
net['mixed_10/join'] = inceptionE(
net['mixed_9/join'],
nfilt=((320,), (384, 384, 384), (448, 384, 384, 384), (192,)),
pool_mode='max')
net['pool3'] = GlobalPoolLayer(net['mixed_10/join'])
net['softmax'] = DenseLayer(net['pool3'], num_units=1008, nonlinearity=lasagne.nonlinearities.softmax)
return net
net = build_network()
output_layer = net['softmax']
print("Defined Inception3 model")
import pickle
params = pickle.load(open('./data/inception3/inception_v3.pkl', 'rb'), encoding='iso-8859-1')
#print("Saved model params.keys = ", params.keys())
#print(" License : "+params['LICENSE']) # Apache 2.0
classes = params['synset words']
lasagne.layers.set_all_param_values(output_layer, params['param values'])
print("Loaded Model")
from model import inception_v3 # This is for image preprocessing functions
image_files = [
'./images/grumpy-cat_224x224.jpg',
'./images/sad-owl_224x224.jpg',
'./images/cat-with-tongue_224x224.jpg',
'./images/doge-wiki_224x224.jpg',
]
import time
t0 = time.time()
for i, f in enumerate(image_files):
#print("Image File:%s" % (f,))
im = inception_v3.imagefile_to_np(f)
prob = np.array( lasagne.layers.get_output(output_layer, inception_v3.preprocess(im), deterministic=True).eval() )
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(im.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(350, 50 + n * 25, '{}. {}'.format(n+1, classes[label]), fontsize=14)
print("DONE : %6.2f seconds each" %(float(time.time() - t0)/len(image_files),))
import requests
index = requests.get('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').text
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:5]
image_urls
import io
for url in image_urls:
try:
ext = url.split('.')[-1]
im = plt.imread(io.BytesIO(requests.get(url).content), ext)
prob = np.array( lasagne.layers.get_output(output_layer, inception_v3.preprocess(im), deterministic=True).eval() )
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(inception_v3.resize_image(im))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(350, 50 + n * 25, '{}. {}'.format(n+1, classes[label]), fontsize=14)
except IOError:
print('bad url: ' + url)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read data files to dict
Step2: EDA
Step3: Batting
Step4: Pair Plots
|
<ASSISTANT_TASK:>
Python Code:
e = np.random.randn(50)
w = 3
x = np.random.rand(50)*np.random.randint(0,10,50)
y = w*x + 2*e
x
x[41], y[41]
sns.regplot(x, y, ci=False)
plt.plot((x[25], x[25]), (13, y[25]-0.3), 'r:');
plt.plot((x[41], x[41]), (y[41]+0.3, 14.5), 'r:');
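# Illustrative check (assumes numpy is already imported as np, as in the cell above): an ordinary
# least-squares fit of a degree-1 polynomial should recover a slope close to the true value w = 3.
slope, intercept = np.polyfit(x, y, 1)
print("estimated slope: {:.2f}, intercept: {:.2f}".format(slope, intercept))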
import pandas as pd
from pandas import DataFrame as DF, Series
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
gnbu = sns.color_palette('GnBu', 40)
sns.set_style('whitegrid')
sets = ['batting',
'pitching',
'player',
'salary']
data = {}
for s in sets:
file = s + '.csv'
data[s] = pd.read_csv(file)
def summary(data):
# data info
print('DATA INFO \n')
data.info()
print(50*'-', '\n')
# numeric summary
print('NUMERIC \n')
print(data.describe().T)
print(50*'-', '\n')
# categorical summary
print('CATEGORICAL \n')
print(data.describe(include=['O']).T)
# import pandas scatter_matrix to create pair-plots
from pandas.plotting import scatter_matrix
summary(data['batting'])
# features to plot
cols = list(data['batting'].loc[:1, 'r':'rbi'].columns) + ['team_id']
cols
teams = data['batting'].team_id.value_counts().head(3).index
colors = {teams[0]: 'r', teams[1]: 'g', teams[2]: 'b'}
subset = data['batting'][data['batting'].team_id.isin(teams)][cols].sample(frac=0.15)
teams
scatter_matrix(subset, c=subset.team_id.apply(lambda x: colors[x]), alpha=0.5, figsize=(14,14));
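# Equivalent view with seaborn (illustrative alternative): pairplot can color the points by team
# directly via the hue argument, avoiding the manual color mapping used above.
sns.pairplot(subset, hue='team_id', plot_kws={'alpha': 0.5});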
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 6.a
Step2: Part 6.b.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig
import control
import plot_learning_curve as plc
num_failures, time_steps_to_failure = control.simulate()
print(num_failures)
plot = plc.plot_learning_curve(time_steps_to_failure[:num_failures])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1) Obtain and Examine Training Data
Step3: While it is possible to look up each of the names of the $r$-band magnitudes in the SDSS PhotoObjAll schema, the schema list is long, and thus there is a chance of missing one column. Better to identify the desired columns using the database itself
Step4: To reiterate a point from above
Step5: Problem 1b Can the acquired training set be used to separate stars and galaxies? [Write your reasoning below]
Step6: Problem 2) Feature Engineering
Step7: Not bad for a model with no optimization!
Step8: One advantage of the SDSS model is that the magnitude measurements have been normalized. The SDSS spectroscopic training set is biased towards stars at bright the end, and towards galaxies at the faint end. (This can be quickly confirmed by plotting histograms, or better yet KDEs, of each.) While the universe is biased in this way, it is not clear that the SDSS targeting function properly captured this bias. Thus, we will attempt to engineer the features to remove this bias (similar to SDSS).
Step9: Problem 2d How does the accuracy of each of these 3 models compare?
Step10: In some cases, the final step in feature engineering is identifying the useful features and removing those that are noisy and/or provide no information. Again, RF is relatively immune to these problems, and in this case (7 features) there are few features. Thus, we do not need to remove any features. Nevertheless, we see that the psfMag_r -fiber2Mag_r feature has very little importance. This is due to the high correlation between fiberMag_r and fiber2Mag_r. It is likely we could remove one of these features without harming the model. We could even measure the model improvement (if any) via cross-validation or the validation set.
Step11: Now, let's try to develop some intuition for the effects of changing the tuning parameters.
Step12: From the above results, we can see that 1 tree is likely too few, while the results seem to be stabilizing with $N_\mathrm{tree} \gtrsim 10$. So - how does one actually determine the optimal set of tuning parameters? Brute force. At least in our case we will use brute force, as the data set and number of tuning parameters is small (there are alternative approaches that are less time consuming when this isn't the case). We will perform a grid search over the three tuning parameters, measuring the CV accuracy at each point within the 3D grid, and picking the point with the highest accuracy. Two general rules of thumb
Step13: Problem 4) Evaluating the Accuracy of the Model
Step14: A standard method for evaluating classification models is the confusion matrix, which summarizes both the accuracy for each individual class, as well the most-likely misclassifications for each class. [For the 2 class problem, such as ours, these quantities are essentially one and the same, but for multi-class problems this is highly helpful.] In examining the matrix, "confusion" for the classifier is summarized. In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal.
Step15: Visual representations are often the most useful means for interpreting the confusion matrix (this is especially true for multiclass problems).
Step16: Note - if your plot looks a little funny, that is likely because you imported seaborne earlier. Remember, import seaborne also changes defaults, but you can change the default appearance of seaborne.
Step17: The ROC curve should come very close to (0, 1), which is the sign of a good model (and with ~97% accuracy we already knew we have a pretty good model). We will now explore the results of thresholding. Suppose you are searching for supernovae in a time domain survey, and it is essential you examine transients in as many galaxies as possible. While searching, there will be many false positives in the form of stars, thus, you need a model that rejects stars (high TPR) with a very low FPR (does not misclassify galaxies).
Step19: That's it! You now have the basic outline necessary to create a full-blown, publication-ready machine-learning model.
Step20: Challenge 1 Calculate the accuracy with which the model classifies QSOs based on the 10k QSOs selected with the above command. How does that accuracy compare to that estimated by the validation set?
Step22: Challenge 2 Can you think of any reasons why the performance would be so much worse for the QSOs than it is for the stars? Can you obtain a ~.97 accuracy when classifying QSOs?
Step23: Uber Challenge Problem) The 3-class problem
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from astropy.table import Table
import matplotlib.pyplot as plt
%matplotlib inline
# execute this cell
from astroquery.sdss import SDSS # enables direct queries to the SDSS database
TSquery = """SELECT TOP 10000
             p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
             p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
             s.class
             FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
             WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
             ORDER BY p.objid ASC
          """
SDSSts = SDSS.query_sql(TSquery)
SDSSts
import seaborn as sns
sns.pairplot(SDSSts.to_pandas(), hue = 'class', diag_kind = 'kde')
from sklearn.cross_validation import train_test_split
rs = 23 # we are in Chicago after all
feats = list(SDSSts.columns)
feats.remove('class')
X = np.array(SDSSts[feats].to_pandas())
y = np.array(SDSSts['class'])
train_X, val_X, train_y, val_y = train_test_split( X, y, test_size = 0.3, random_state = rs)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
RFmod = RandomForestClassifier(n_estimators = 50)
RFmod.fit(train_X, train_y)
val_preds = RFmod.predict(val_X)
print("The raw features produce a model with accuracy ~{:.4f}".format(accuracy_score(val_y, val_preds)))
SDSSpred = SDSSts['psfMag_r'] - SDSSts['cModelMag_r'] > 0.145
SDSSphot_class = np.empty(len(SDSSpred), dtype = '|S6')
SDSSphot_class[SDSSpred] = 'GALAXY'
SDSSphot_class[SDSSpred == False] = 'STAR'
print("The SDSS phot model produces an accuracy ~{:.4f}".format(accuracy_score(SDSSts['class'], SDSSphot_class)))
train_Xnorm = train_X[:,0][:,np.newaxis] - train_X[:,1:]
val_Xnorm = val_X[:,0][:,np.newaxis] - val_X[:,1:]
RFmod = RandomForestClassifier(n_estimators = 50)
RFmod.fit(train_Xnorm, train_y)
val_predsNorm = RFmod.predict(val_Xnorm)
print("The normalized features produce a model with accuracy ~{:.4f}".format(accuracy_score(val_y, val_predsNorm)))
print(RFmod.feature_importances_) # print the importances
indicies = np.argsort(RFmod.feature_importances_)[::-1] # sort the features most imp. --> least imp.
# recall that all features are normalized relative to psfMag_r
featStr = ", \n".join(['psfMag_r - {:s}'.format(x) for x in list(np.array(feats)[1:][indicies])])
print("The relative importance of the features is: \n{:s}".format(featStr))
from sklearn.cross_validation import cross_val_score
CVscores = cross_val_score(RFmod, train_Xnorm, train_y, cv = 5)
print(CVscores)
print("The CV accuracy for the training set is {:.4f}".format(np.mean(CVscores)))
CVscores1 = cross_val_score(RandomForestClassifier(n_estimators = 1), train_Xnorm, train_y, cv = 5)
CVscores10 = cross_val_score(RandomForestClassifier(n_estimators = 10), train_Xnorm, train_y, cv = 5)
CVscores100 = cross_val_score(RandomForestClassifier(n_estimators = 100), train_Xnorm, train_y, cv = 5)
print("The CV accuracy for 1, 10, 100 trees is {:.4f}, {:.4f}, {:.4f}".format(np.mean(CVscores1), np.mean(CVscores10), np.mean(CVscores100)))
from sklearn.grid_search import GridSearchCV
grid_results = GridSearchCV(RandomForestClassifier(),
{'n_estimators': [30, 100, 300], 'max_features': [1, 3, 7], 'min_samples_leaf': [1,10]},
cv = 5)
grid_results.fit(train_Xnorm, train_y)
print("The optimal parameters are:")
for key, item in grid_results.best_params_.items(): # warning - slightly different meanings in Py2 & Py3
print("{}: {}".format(key, item))
RFmod = RandomForestClassifier(n_estimators=300, max_features=3, min_samples_leaf=10, n_jobs=-1).fit(train_Xnorm, train_y)
val_predsNorm = RFmod.predict(val_Xnorm)
print("The accuracy of the optimized model is ~{:.4f}".format(accuracy_score(val_y, val_predsNorm)))
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(val_y, val_predsNorm)
print(cm)
cmNorm = cm.astype(float)/cm.sum(axis = 1)[:,np.newaxis]
plt.imshow(cmNorm, interpolation = 'Nearest', cmap = 'Blues', vmin = 0, vmax = 1)
plt.colorbar()
tick_names = ["GALAXY", "STAR"]
tick_marks = np.arange(len(tick_names))
plt.xticks(tick_marks, tick_names, rotation=45)
plt.yticks(tick_marks, tick_names)
plt.ylabel('True label')
plt.xlabel('Predicted label')
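# Illustrative follow-up: the diagonal of the normalized confusion matrix gives the per-class
# accuracy (galaxy completeness and star completeness, respectively).
for name, acc in zip(tick_names, cmNorm.diagonal()):
    print("{} accuracy: {:.4f}".format(name, acc))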
from sklearn.metrics import roc_curve
train_yInt = np.zeros(len(train_y), dtype = int)
train_yInt[train_y == 'STAR'] = 1
val_yInt = np.zeros(len(val_y), dtype = int)
val_yInt[val_y == 'STAR'] = 1
RFmod = RandomForestClassifier(n_estimators=300, max_features=3, min_samples_leaf=10)
RFmod.fit(train_Xnorm, train_yInt)
val_predsProba = RFmod.predict_proba(val_Xnorm)
fpr, tpr, thresh = roc_curve(val_yInt, val_predsProba[:,1])
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
fpr01_idx = (np.abs(fpr-0.01)).argmin()
tpr01 = tpr[fpr01_idx]
thresh01 = thresh[fpr01_idx]
print("At FPR = 0.01, the TPR = {:.3f} corresponding to decision threshold = {:.3f}".format(tpr01, thresh01))
QSOquery = """SELECT TOP 10000
              p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
              p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
              s.class
              FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
              WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class = 'QSO'
              ORDER BY s.specobjid ASC
           """
QSOts = SDSS.query_sql(QSOquery)
qso_X = np.array(QSOts[feats].to_pandas())
qso_y = np.ones(len(QSOts)) # we are defining QSOs as stars for this exercise
qso_Xnorm = qso_X[:,0][:,np.newaxis] - qso_X[:,1:]
qso_preds = RFmod.predict(qso_Xnorm)
print("The RF model correctly classifies ~{:.4f} of the QSOs".format(accuracy_score(qso_y, qso_preds)))
# As discussed above, low-z AGN have resolved host galaxies which will confuse the classifier,
# this can be resolved by only selecting high-z QSOs (z > 1.5)
QSOquery = """SELECT TOP 10000
              p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
              p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
              s.class
              FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
              WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class = 'QSO'
              AND s.z > 1.5
              ORDER BY s.specobjid ASC
           """
QSOts = SDSS.query_sql(QSOquery)
qso_X = np.array(QSOts[feats].to_pandas())
qso_y = np.ones(len(QSOts)) # we are defining QSOs as stars for this exercise
qso_Xnorm = qso_X[:,0][:,np.newaxis] - qso_X[:,1:]
qso_preds = RFmod.predict(qso_Xnorm)
print("The RF model correctly classifies ~{:.4f} of the QSOs".format(accuracy_score(qso_y, qso_preds)))
# brokenBrain - the Uber Challenge (3-class STAR/GALAXY/QSO model) is left as an exercise
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flexible Schema
Step2: projections
Step3: Getting Data into MongoDB
Step4: Using mongoimport
Step5: These operators can also be used with datetime objects
Step6: $regex
Step7: Querying Arrays Using Scalars
Step8: $all
Step9: updates
Step10: set unset
Step11: NOTE: Need to remember that the second argument must have the operator $set or $unset. If that is not there, then the entire document found by the first one would be replaced by what is passed as the second argument.
Step12: PROBLEM SET STARTS HERE
Step15: For this exercise, the arachnid data is already in the database. You have been
|
<ASSISTANT_TASK:>
Python Code:
import pprint
def get_client():
from pymongo import MongoClient
return MongoClient('mongodb://localhost:27017/')
def get_db():
# 'examples' here is the database name. It will be created if it does not exist.
db = get_client().examples
return db
def add_city(db):
db.cities.insert_one({"name" : "Chicago"})
def get_city(db):
return db.cities.find_one()
db = get_db()
#add_city(db)
for city in db.cities.find():
pprint.pprint(city)
#db.cities.insert_one({"name": "London", "isOld": True})
#db.cities.insert_one({"name": "New York", "isNew": True})
def find_in_cities(query):
for city in db.cities.find(query):
pprint.pprint(city)
find_in_cities({"name": "London", "isOld": False})
find_in_cities({"name": "London", "isOld": True})
query = {"name": "London", "isOld": True}
projection = {"_id": 0, "name": 1}
for city in db.cities.find(query, projection):
pprint.pprint(city)
db.cities.count()
list(city for city in db.cities.find())
from pymongo import MongoClient
import csv
import json
import io
import re
import pprint
field_map = {
"name" : "name",
"bodyStyle_label" : "bodyStyle",
"assembly_label" : "assembly",
"class_label" : "class",
"designer_label" : "designer",
"engine_label" : "engine",
"length" : "length",
"height" : "height",
"width" : "width",
"weight" : "weight",
"wheelbase" : "wheelbase",
"layout_label" : "layout",
"manufacturer_label" : "manufacturer",
"modelEndYear" : "modelEndYear",
"modelStartYear" : "modelStartYear",
"predecessorLabel" : "predecessorLabel",
"productionStartYear" : "productionStartYear",
"productionEndYear" : "productionEndYear",
"transmission" : "transmission"
}
fields = field_map.keys()
def skip_lines(input_file, skip):
for i in range(0, skip):
next(input_file)
def is_number(s):
try:
float(s)
return True
except ValueError:
return False
def strip_automobile(v):
return re.sub(r"\s*\(automobile\)\s*", " ", v)
def strip_city(v):
return re.sub(r"\s*\(city\)\s*", " ", v)
def parse_array(v):
if (v[0] == "{") and (v[-1] == "}"):
v = v.lstrip("{")
v = v.rstrip("}")
v_array = v.split("|")
v_array = [i.strip() for i in v_array]
return v_array
return v
def mm_to_meters(v):
if v < 0.01:
return v * 1000
return v
def clean_dimension(d, field, v):
if is_number(v):
if field == "weight":
d[field] = float(v) / 1000.0
else:
d[field] = mm_to_meters(float(v))
def clean_year(d, field, v):
d[field] = v[0:4]
def parse_array2(v):
if (v[0] == "{") and (v[-1] == "}"):
v = v.lstrip("{")
v = v.rstrip("}")
v_array = v.split("|")
v_array = [i.strip() for i in v_array]
return (True, v_array)
return (False, v)
def ensure_not_array(v):
(is_array, v) = parse_array(v)
if is_array:
return v[0]
return v
def ensure_array(v):
(is_array, v) = parse_array2(v)
if is_array:
return v
return [v]
def ensure_float(v):
if is_number(v):
return float(v)
def ensure_int(v):
if is_number(v):
return int(v)
def ensure_year_array(val):
#print "val:", val
vals = ensure_array(val)
year_vals = []
for v in vals:
v = v[0:4]
v = int(v)
if v:
year_vals.append(v)
return year_vals
def empty_val(val):
val = val.strip()
return (val == "NULL") or (val == "")
def years(row, start_field, end_field):
start_val = row[start_field]
end_val = row[end_field]
if empty_val(start_val) or empty_val(end_val):
return []
start_years = ensure_year_array(start_val)
if start_years:
start_years = sorted(start_years)
end_years = ensure_year_array(end_val)
if end_years:
end_years = sorted(end_years)
all_years = []
if start_years and end_years:
#print start_years
#print end_years
for i in range(0, min(len(start_years), len(end_years))):
for y in range(start_years[i], end_years[i]+1):
all_years.append(y)
return all_years
def process_file_autos(input_file):
input_data = csv.DictReader(open(input_file))
autos = []
skip_lines(input_data, 3)
for row in input_data:
auto = {}
model_years = {}
production_years = {}
dimensions = {}
for field, val in row.iteritems():
if field not in fields or empty_val(val):
continue
if field in ["bodyStyle_label", "class_label", "layout_label"]:
val = val.lower()
val = strip_automobile(val)
val = strip_city(val)
val = val.strip()
val = parse_array(val)
if field in ["length", "width", "height", "weight", "wheelbase"]:
clean_dimension(dimensions, field_map[field], val)
elif field in ["modelStartYear", "modelEndYear"]:
clean_year(model_years, field_map[field], val)
elif field in ["productionStartYear", "productionEndYear"]:
clean_year(production_years, field_map[field], val)
else:
auto[field_map[field]] = val
if dimensions:
auto['dimensions'] = dimensions
auto['modelYears'] = years(row, 'modelStartYear', 'modelEndYear')
auto['productionYears'] = years(row, 'productionStartYear', 'productionEndYear')
autos.append(auto)
return autos
def insert_autos(infile, db):
data = process_file_autos(infile)
# Add your code here. Insert the data in one command.
db.autos.insert_many(data)
db = get_db()
#insert_autos('../autos.csv', db)
db.autos.find_one()
db.autos.count()
list(city for city in db.autos.find())
query = {'dimensions.weight': {"$gt": 2000}}
db.autos.count(query)
#using ASCII to query for string types
query = {'name': {"$gte": "H", "$lte": "Z"}}
print db.autos.count(query)
print db.autos.find_one(query)
def printCountAndOne(collection, query):
print "***"
print collection.count(query)
pprint.pprint(collection.find_one(query))
print db.autos.count()
printCountAndOne(db.autos, {'assembly': {"$exists": 1}})
printCountAndOne(db.autos, {'assembly': {"$exists": 0}})
printCountAndOne(db.autos, {"assembly": {"$regex": "Japan"}})
printCountAndOne(db.autos, {"productionYears": {"$in": [1986,1987,1988, 1992]}})
printCountAndOne(db.autos, {"productionYears": {"$all": [1986,1987,1988, 1992]}})
from random import randint
new_name = "test {}".format(randint(0,999))
print "new name is {}".format(new_name)
auto = db.autos.find_one({"productionYears": {"$in": [1986,1987,1988, 1992]}})
auto['name'] = new_name
db.autos.save(auto)
db.autos.find_one({"name": {"$eq": new_name}})
#if exists then update else insert
db.autos.update_one({"assembly": {"$regex": "Japan"}}, {"$set": {"test": "japanese"}})
print db.autos.count({"test": "japanese"})
#if exists then remove field else ignore
db.autos.update_one({"test": "japanese"}, {"$unset": {"test": ""}})
print db.autos.count({"test": "japanese"})
db.autos.update_many({"assembly": {"$regex": "Japan"}}, {"$set": {"test": "japanese"}})
print db.autos.count({"test": "japanese"})
#db.autos.drop()
query = {"test": "japanese"}
#print db.autos.delete_many(query)
print db.autos.count()
import codecs
import csv
import json
import pprint
import re
DATAFILE = 'arachnid.csv'
FIELDS ={'rdf-schema#label': 'label',
'URI': 'uri',
'rdf-schema#comment': 'description',
'synonym': 'synonym',
'name': 'name',
'family_label': 'family',
'class_label': 'class',
'phylum_label': 'phylum',
'order_label': 'order',
'kingdom_label': 'kingdom',
'genus_label': 'genus'}
def remove_brackets_part(string):
pattern = re.compile(r"\([^)]*?\)")
return pattern.sub("", string).strip()
def process_file(filename, fields):
process_fields = fields.values()
data = []
with open(filename, "r") as f:
reader = csv.DictReader(f)
for i in range(3):
l = reader.next()
for line in reader:
for key, value in fields.iteritems():
if key != value:
line[value] = line[key]
if value in line:
line[value] = line[value].strip()
line['label'] = remove_brackets_part(line['label'])
if ('name' not in line) or (line['name'] is None) or line['name'].isalnum():
line['name'] = line['label']
if 'synonym' in line:
if line['synonym'] == 'NULL':
line['synonym'] = None
else:
line['synonym'] = parse_array(line['synonym'])
for key in line.keys():
if (key not in line) or (line[key] == 'NULL'):
line[key] = None
data.append({'label': line['label'],
'uri': line['uri'],
'description': line['description'],
'name': line['name'],
'synonym': line['synonym'],
'classification': {
'family': line['family'],
'class': line['class'],
'phylum': line['phylum'],
'order': line['order'],
'kingdom': line['kingdom'],
'genus': line['genus']
}
})
return data
def parse_array(v):
if (v[0] == "{") and (v[-1] == "}"):
v = v.lstrip("{")
v = v.rstrip("}")
v_array = v.split("|")
v_array = [i.strip() for i in v_array]
return v_array
return [v]
def test():
data = process_file(DATAFILE, FIELDS)
print "Your first entry:"
pprint.pprint(data[0])
first_entry = {
"synonym": None,
"name": "Argiope",
"classification": {
"kingdom": "Animal",
"family": "Orb-weaver spider",
"order": "Spider",
"phylum": "Arthropod",
"genus": None,
"class": "Arachnid"
},
"uri": "http://dbpedia.org/resource/Argiope_(spider)",
"label": "Argiope",
"description": "The genus Argiope includes rather large and spectacular spiders that often have a strikingly coloured abdomen. These spiders are distributed throughout the world. Most countries in tropical or temperate climates host one or more species that are similar in appearance. The etymology of the name is from a Greek name meaning silver-faced."
}
assert len(data) == 76
assert data[0] == first_entry
assert data[17]["name"] == "Ogdenia"
assert data[48]["label"] == "Hydrachnidiae"
assert data[14]["synonym"] == ["Cyrene Peckham & Peckham"]
test()
db = get_db()
with open('arachnid.json') as f:
data = json.loads(f.read())
db.arachnid.insert(data, db)
DATAFILE = 'arachnid.csv'
FIELDS ={'rdf-schema#label': 'label',
'binomialAuthority_label': 'binomialAuthority'}
def add_field(filename, fields):
    """Complete this function to set up a dictionary for adding binomialAuthority
    information to the database."""
process_fields = fields.keys()
data = {}
with open(filename, "r") as f:
reader = csv.DictReader(f)
for i in range(3):
l = reader.next()
for line in reader:
binomialAuthority_label = line['binomialAuthority_label']
if binomialAuthority_label == 'NULL':
continue
label = remove_brackets_part(line['rdf-schema#label'])
data[label] = binomialAuthority_label
return data
def update_db(data, db):
    """Use the dictionary you generated from add_field to update the database."""
for key, value in data.iteritems():
pprint.pprint("{}:{}".format(key, value))
db.arachnid.update_many({"label": {"$eq": key}}, {"$set": {"classification.binomialAuthority": value}})
def testFinal():
# Please change only the add_field and update_db functions!
# Changes done to this function will not be taken into account
# when doing a Test Run or Submit, they are just for your own reference
# and as an example for running this code locally!
data = add_field(DATAFILE, FIELDS)
from pymongo import MongoClient
client = MongoClient("mongodb://localhost:27017")
db = client.examples
update_db(data, db)
updated = db.arachnid.find_one({'label': 'Opisthoncana'})
assert updated['classification']['binomialAuthority'] == 'Embrik Strand'
pprint.pprint(data)
testFinal()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 0
Step2: Arithmetic operators follow precedence rules (exponentiation has priority over multiplication, and multiplication over addition). These priorities can be changed within a calculation by using parentheses.
Step3: Exercise 1
Step4: Elementary statements
Step5: The main Python types
Step6: The float type for floating-point numbers (approximations of real numbers)
Step7: The bool type for booleans (two possible logical values, True or False)
Step8: The str type for character strings
Step9: The list type for lists (containers holding several values)
Step10: The tuple type for tuples (containers holding several values, like lists, but immutable once defined)
Step11: Input / output statements
Step12: To display text, it must be contained in a character string delimited by single quotes such as 'La mer' or double quotes such as "L'océan".
Step13: We can also display the value of an expression obtained by evaluating a combination of literals (values of the language's basic types), operators and variables
Step14: Several arguments can be passed to the print function, separated by commas; print displays them concatenated, using a space as the default separator.
Step15: By default, print adds a line break after each display, but this default behaviour can be changed by setting the optional parameter end=''.
Step16: The default separator can also be changed with the parameter sep=''.
Step17: To display help on a Python function, you can use the online documentation at https
Step18: Exercise 3
Step19: The input function
Step20: Answer to exercise 4
Step21: The test
Step22: Exercise 7
Step23: Exercise 8
Step24: The for loop, or unconditional loop
Step25: Exercise 10
Step26: Functions
Step27: Exercise 12
Step28: The + concatenation operator
Step29: The len function returns the length of a container with len(conteneur).
Step30: The [] operator gives access to the element at position k in a container (string or list) with the syntax conteneur[k].
Step31: First, positive indices
Step32: Here is what happens when you try to access an index outside the valid range
Step33: And now negative indices
Step34: Functions specific to character strings
Step35: There are many functions specific to character strings; before building a tool that already exists, it is advisable to consult the online documentation under Built-in Types.
Step36: Removing an element is done with the built-in function (or method) pop, with the syntax liste.pop(k) where k is the position of the element to remove, between 0 and len(liste) - 1, the latter being the default value.
Step37: From character strings to lists and back
Step38: Conversely, the join function glues the strings stored in a list champs back together using a joining character sep passed as a parameter, with the syntax 'sep'.join(champs).
Step39: Exercise 13
Step40: Exercise 14
Step41: Library functions and the turtle module
Step42: Exercise 16
Step43: Exercise 17
Step44: The while loop, or conditional loop
Step45: Exercise 20
|
<ASSISTANT_TASK:>
Python Code:
# ceci est un commentaire, l'interpréteur ne le lit même pas
# les commentaires sont destinés au lecteur humain du code source
code = 'sesame' #affectation de variable
rep = input('Entrez le code : ') #affectation et instruction d'entrée
if rep == code: # test et début d'un bloc d'instructions
print('Vous pouvez entrer.') # instruction si test vrai
else:
print('Code incorrect, désolé ce ne sera pas possible.') # instruction si test faux
# L'opérateur + réalise l'addition de deux entiers
1 + 2
# L'opérateur + réalise aussi l'addition de deux flottants
2.0 + 3.5
# L'opérateur * réalise la multiplication de deux entiers
3 * 2
# L'opérateur * réalise aussi la multiplication de deux flottants
2.0 * 3.14
# L'opérateur - réalise la soustraction de deux entiers ou de deux flottants
(10 - 4, 3.14 - 0.14) #on utilise des parenthèses pour créer un couple de valeurs
# L'opérateur ** réalise l'exponentiation d'un entier par un entier, idem pour les flottants
(2 ** 3, 2.0 ** 0.5)
# L'opérateur // retourne le quotient de la division euclidienne de deux entiers
15 // 4
# L'opérateur / retourne l'approximation du quotient par le flottant le plus proche
(15 / 4, 2/3)
# L'opérateur % retourne le reste de la division euclidienne de deux entiers
15 % 4
( 1 + 2 * 3 ** 2) / 2
1 + 2 * 3 ** 2 / 2
# Répondez à l'exercice 1 à partir de cette cellule de code
# Vous saisissez un calcul puis vous appuyez sur CTRL + Return pour obtenir le résultat
#pour insérer ensuite une nouvelle cellule de code en-dessous, appuyez sur b.
# Cellule de code pour répondre à l'exercice 2
a = 12
print(type(a))
b = 1/3
print(type(b))
c = (1 < 3)
c
print(type(c))
d = ( 1 == 2)
d
print(type(d))
e = 'Hello World'
print(e)
print(type(e))
L = [1, 2, 4]
print(L)
print(type(L))
coord = (1, 2)
print(coord)
print(type(coord))
print(Bonjour)
print('Bonjour')
n = 2000
print(2017 - n)
print('Vous avez ', 2017 - n, ' ans')
print('Ligne 1')
print('Ligne 2')
print('Colonne 1', end='|')
print('Colonne 2')
print('Colonne 1', 'Colonne 2', sep='|')
help(print)
# Répondre à l'exercice 3 dans cette cellule de code
date = input('Entrez votre date de naissance : ')
print('Vous avez ', 2017 - date, 'ans')
# Répondez à l'exercice 5 dans cette cellule de code
# Cellule pour exécuter le code de l'exercice 6
# Répondre à la première question de l'exercice 7 dans cette cellule de code
# Répondre à l'exercice 8 dans cette cellule de code
# Répondre à l'exercice 9 dans cette cellule de code
# Répondre à l'exercice 10 dans cette cellule de code
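# Illustrative sketch (not one of the exercise answers): a simple for loop over
# range(), as described in the "for loop" step above.
for k in range(5):
    print('k =', k)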
# Répondre à l'exercice 11 dans cette cellule de code.
chaine = 'Bonjour'
'b' in chaine
'B' in chaine
liste = [0, 7]
8 in liste
0 in liste
chaine = 'Bonjour'
liste = [0,7]
chaine2 = chaine + ' à tous.'
chaine2
liste2 = [0] + liste
liste2
len(liste2), len(chaine2)
chaine = 'Bonjour'
liste = [0,7]
chaine[0], chaine[1], chaine[len(chaine) - 1]
chaine[len(chaine)]
chaine[-1], chaine[-2], chaine[-len(chaine)]
chaine = 'Bonjour'
chaine.lower()
chaine.upper()
liste = [0,7]
liste.append(1)
print(liste)
liste = [9,3]
a = liste.pop()
print(liste, a)
liste.append(4)
print(liste)
b = liste.pop(0)
print(b, liste)
chaine = 'Frédéric,Junier,enseignant,mathématiques'
champs = chaine.split(',')
print(champs)
chaine2 = '*'.join(champs)
print(chaine2)
# Cellule de code pour répondre à l'exercice 13
# Cellule de code pour répondre à l'exercice 14
from turtle import *
speed(1) #parametrage de la vitesse de 1 lent à 10 rapide, 0 étant la vitesse la plus rapide
shape("turtle") #choix de la forme de la tortue
pencolor("red") #choix de la couleur du crayon
pensize(4) #épaisseur du crayon
penup() #lever le crayon
goto(0,0) # aller à la position (0,0)
setheading(0) #orientation de la tortue vers l'Est / 90 Nord / 180 Ouest / 270 Sud
pendown() #poser le crayon
for k in range(4):
forward(100) #avancer de 100 pixels
left(90) #tourner de 90 degrés
mainloop() #gestionnaire d’evenement, indispensable pour l’affichage et l’interactivite
# Cellule de code pour répondre à l'exercice 16
# cellule de code pour répondre à l'exercice 18
# Cellule de code pour répondre à l'exercice 19
# Cellule de code pour répondre à l'exercice 20
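# Illustrative sketch (not one of the exercise answers): a while loop runs as long
# as its condition stays true, as described in the "while loop" step above.
counter = 0
while counter < 5:
    print('counter =', counter)
    counter = counter + 1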
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
import tensorflow as tf
# The skip connections below reuse earlier feature maps, so the model is built with
# the functional API; `width`, `height`, `channels` and `labels` are assumed to be
# defined earlier.
inputs = tf.keras.layers.Input((width, height, channels))
s = tf.keras.layers.Lambda(lambda x: x / 255)(inputs)
# contracting path
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(s)
c1 = tf.keras.layers.Dropout(0.2)(c1)
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)
c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p1)
c2 = tf.keras.layers.Dropout(0.2)(c2)
c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c2)
p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)
# bottleneck (no further pooling, so the upsampled maps match c2 and c1 in size)
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p2)
c3 = tf.keras.layers.Dropout(0.2)(c3)
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c3)
# expanding path with skip connections
u1 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c3)
u1 = tf.keras.layers.concatenate([u1, c2])
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u1)
c4 = tf.keras.layers.Dropout(0.2)(c4)
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c4)
u2 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c4)
u2 = tf.keras.layers.concatenate([u2, c1], axis=3)
c5 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u2)
c5 = tf.keras.layers.Dropout(0.2)(c5)
c5 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c5)
outputs = tf.keras.layers.Conv2D(len(labels), (1, 1), activation='softmax')(c5)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Attacker Specific Analysis
Step2: Attack
Step3: Q
Step4: Victim Specific Analysis
Step5: Shared Analysis
Step6: Q
Step7: Q
Step8: Q
Step9: Q
Step10: Q
Step11: Q
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from load_utils import *
from analysis_utils import compare_groups,get_genders
d = load_diffs()
df_events, df_blocked_user_text = load_block_events_and_users()
d['blocked'].groupby('user_text')['pred_aggression_score']\
.agg( {'aggressiveness': np.mean})\
.hist(bins = 50)
plt.xlabel('user level aggression score')
plt.ylabel('num users')
plt.title('')
# lets exclude anons
d['blocked'].query('not author_anon').groupby('user_text')['pred_aggression_score']\
.agg( {'aggressiveness': np.mean})\
.hist(bins = 50)
plt.xlabel('user level aggression score')
plt.ylabel('num users')
plt.title('')
# lets compare to non-blocked users
# NOTE: would be better to have taken a random sample of users
d['2015'].query('not author_anon').groupby('user_text')['pred_aggression_score']\
.agg( {'aggressiveness': np.mean})\
.hist(bins = 50)
plt.xlabel('user level aggression score')
plt.ylabel('num users')
plt.title('')
d['blocked'].groupby('user_text')['pred_recipient_score']\
.agg( {'aggressiveness': np.mean}).hist(bins = 30)
plt.xlabel('user level attack score')
plt.ylabel('num users')
plt.title('')
d['blocked'].query('not author_anon').groupby('user_text')['pred_recipient_score']\
.agg( {'aggressiveness': np.mean}).hist(bins = 30)
plt.xlabel('user level attack score')
plt.ylabel('num users')
plt.title('')
d['2015'].query('not author_anon').groupby('user_text')['pred_recipient_score']\
.agg( {'aggressiveness': np.mean})\
.hist(bins = 50)
plt.xlabel('user level attack score')
plt.ylabel('num users')
plt.title('')
# TODO
# TODO
o = (False, True)
x = 'author_anon'
compare_groups(d['sample'][:100000], x, order = o)
# don't count posts to own article
o = (False, True)
x = 'recipient_anon'
compare_groups(d['sample'][:100000].query('not own_page'), x, order = o)
x = 'own_page'
o = (False, True)
compare_groups(d['sample'][:100000], x, order = o)
x = 'own_page'
compare_groups(d['sample'][:100000], x, order = o, hue = 'author_anon')
d_gender = get_genders(d['sample'])
o = ('unknown: registered', 'male', 'female')
x = 'author_gender'
compare_groups(d_gender, x, order = o)
o = ('unknown: registered', 'male', 'female')
x = 'recipient_gender'
compare_groups(d_gender.query('not own_page'), x, order= o)
o = ('unknown: registered', 'male', 'female')
x = 'author_gender'
compare_groups(d_gender.query("not own_page and recipient_gender != 'unknown:anon'"), x, order = o, hue = 'recipient_gender')
thresholds = np.percentile(d['2015']['user_text'].value_counts(), np.arange(0, 100.01,0.5 ))
thresholds = sorted(set(thresholds.astype(int)))
bins = []
for i in range(len(thresholds)-1):
label = '%d-%d' % (thresholds[i], thresholds[i+1]-1)
rnge = range(thresholds[i], thresholds[i+1])
bins.append((label, rnge))
def map_count(x):
for label, rnge in bins:
if x in rnge:
return label
d_temp = d['2015'].query('not author_anon')\
.groupby('user_text')['pred_aggression_score']\
.agg( {'aggressiveness': np.mean, 'count': len})\
.assign(num_comment_range = lambda x: x['count'].apply(map_count))
o = [e[0] for e in bins]
sns.pointplot(x='num_comment_range', y= 'aggressiveness', data= d_temp, order = o)
# TODO: extend to attacks, use long term user data, repeat for victims
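# Sketch of the extension suggested above: the same per-user binning, applied to the
# attack scores (pred_recipient_score) instead of the aggression scores.
d_temp_attack = d['2015'].query('not author_anon')\
    .groupby('user_text')['pred_recipient_score']\
    .agg({'attack score': np.mean, 'count': len})\
    .assign(num_comment_range = lambda x: x['count'].apply(map_count))
sns.pointplot(x='num_comment_range', y='attack score', data=d_temp_attack, order=o)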
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So total dollar volume is $3.09$ million USD. Notice that this is equivalent to taking the dollar volume averaged price and multiplying by the number of shares traded over that bar.
Step2: Often in real datasets you will be given averaged or 'last traded' data rather than individual trades. With averaged data, some average is taken over a bar (time period). With last traded data, only the final observation from that bar is reported. It is important to know if the data is averaged, volume averaged, or simply last traded. All of these will need to be treated differently.
Step3: Notice that just plotting the volume gives us lots of gaps. This is because we are plotting data with time on the x-axis, and equities are only traded during part of the day. The rest of the day we have no data for volume. There is a small amount of overnight trading that occurs for equities, but it is often only useful as a data source as the liquidity is too low to trade. Quantopian does not currently have data on overnight trading, just for regular market hours.
Step4: Let's look at how this volume is balanced out over a single day.
Step5: This is pretty noisy, but with pandas we can easily take an average across all the days in our sample to get a sense of how volume fluctuates on average. Here is the average volume for a given minute over the whole time period.
Step6: High Equity Volume After Market Open
Step7: Broad Liquidity Considerations in Equities
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
num_shares = np.asarray([30000, 60000, 10000])
prices = np.asarray([30, 31, 33])
np.dot(num_shares, prices)
# Get the average trade price
print "Average trade price: %s" % (np.mean(prices))
# Get the volume weighted average
vw = np.dot(num_shares, prices) / float(np.sum(num_shares))
print "Volume weighted average trade price: %s" % (vw)
# Go back to volume
v = vw * np.sum(num_shares)
print "Volume: %s" % (v)
data = get_pricing('SPY', start_date='2015-6-1', end_date='2015-6-20', frequency='minute')
data.head()
plt.plot(data['volume'])
plt.ylabel('Volume (USD)');
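# The gaps above are the hours when the market is closed; one quick check is to keep
# only regular-session minutes (sketch, assuming the index is in market time).
market_hours = data.between_time('9:31', '16:00')
print len(data), len(market_hours)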
plt.plot(data['volume']['2015-6-4'])
plt.ylabel('Volume (USD)');
avg_minute_volume = data.groupby([data.index.hour, data.index.minute]).mean()['volume']
avg_minute_volume.plot()
plt.ylabel('Volume (USD)');
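# Sketch: a daily dollar-volume-weighted price from the minute bars, mirroring the
# hand computation above; the 'price' column name is an assumption about get_pricing.
dollar_volume = (data['price'] * data['volume']).resample('D').sum()
daily_vwap = dollar_volume / data['volume'].resample('D').sum()
daily_vwap.dropna().plot()
plt.ylabel('Daily volume-weighted price (USD)');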
start_date = '2016-04-01'
end_date = '2016-06-14'
aapl_volume = get_pricing('AAPL', start_date=start_date, end_date=end_date, fields='volume')
spy_future_volume = get_pricing('ESM16', start_date=start_date, end_date=end_date, fields='volume')
aapl_volume.plot()
spy_future_volume.plot()
plt.legend();
crude_future_volume = get_pricing('CLM16', start_date=start_date, end_date=end_date, fields='volume')
spy_future_volume.plot()
crude_future_volume.plot()
plt.legend();
palladium_future_volume = get_pricing('PAM16', start_date=start_date, end_date=end_date, fields='volume')
crude_future_volume.plot()
palladium_future_volume.plot()
plt.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Authenticate your GCP account
Step2: Create a Cloud Storage bucket
Step3: Only if your bucket doesn't already exist
Step4: Finally, validate access to your Cloud Storage bucket by examining its contents
Step5: Import libraries and define constants
Step6: Create a dataset
Step7: Cloud training
Step8: Local training snippet
Step9: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
REGION = 'us-central1' #@param {type:"string"}
! gsutil mb -l $REGION gs://$BUCKET_NAME
! gsutil ls -al gs://$BUCKET_NAME
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from sklearn import datasets
from sklearn.model_selection import train_test_split
import pandas as pd
import tensorflow as tf
import os
import time
from IPython.core.display import HTML
# load Iris dataset
iris = datasets.load_iris()
names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
data = pd.DataFrame(iris.data, columns=names)
# add target
data['target'] = iris.target
# split
training, validation = train_test_split(data, test_size=50, stratify=data['target'])
# standardization
training_targets = training.pop('target')
validation_targets = validation.pop('target')
data_mean = training.mean(axis=0)
data_std = training.std(axis=0)
training = (training - data_mean) / data_std
training['target'] = training_targets
validation = (validation - data_mean) / data_std
validation['target'] = validation_targets
print('Training data head')
display(training.head())
training_data = os.path.join('gs://', BUCKET_NAME, 'data/train.csv')
validation_data = os.path.join('gs://', BUCKET_NAME, 'data/valid.csv')
print('Copy the data in bucket ...')
with tf.io.gfile.GFile(training_data, 'w') as f:
training.to_csv(f, index=False)
with tf.io.gfile.GFile(validation_data, 'w') as f:
validation.to_csv(f, index=False)
output_location = os.path.join('gs://', BUCKET_NAME, 'output')
job_name = "knn_classification_{}".format(time.strftime("%Y%m%d%H%M%S"))
!gcloud ai-platform jobs submit training $job_name \
--master-image-uri gcr.io/aihub-c2t-containers/kfp-components/oob_algorithm/knn:latest \
--region $REGION \
--scale-tier CUSTOM \
--master-machine-type standard \
-- \
--output-location {output_location} \
--training-data {training_data} \
--validation-data {validation_data} \
--target-column target \
--data-type csv \
--k-neighbors 5 \
--fresh-start True \
--objective classification
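# Rough local equivalent of the "local training snippet" step (sketch only): the same
# container can be run with Docker, assuming Docker is installed and gcloud
# application-default credentials are available on this machine.
! docker run -v $HOME/.config/gcloud:/root/.config/gcloud \
  gcr.io/aihub-c2t-containers/kfp-components/oob_algorithm/knn:latest \
  --output-location {output_location} \
  --training-data {training_data} \
  --validation-data {validation_data} \
  --target-column target \
  --data-type csv \
  --k-neighbors 5 \
  --fresh-start True \
  --objective classification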
if not tf.io.gfile.exists(os.path.join(output_location, 'report.html')):
raise RuntimeError('The file report.html was not found. Did the training job finish?')
with tf.io.gfile.GFile(os.path.join(output_location, 'report.html')) as f:
display(HTML(f.read()))
# If training job is still running, cancel it
! gcloud ai-platform jobs cancel $job_name --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Files
Step2: The same program, written in a more condensed form
Step3: We look at the first lines of the file mat2.txt
Step4: Reading
Step5: We recover the same information, except that we must not forget to convert the initial numbers to float.
Step6: That is better. The os.path module offers various functions for manipulating file names. The os module offers various functions for manipulating files
Step7: with
Step8: The close statement closes the file. While it is open, the file is reserved by the Python program and no other application can write to it. After the close statement, another application can delete or modify it. With the with keyword, the close method is called implicitly.
Step9: Exercise 2
Step11: The next cell lets you save the contents of the previous cell to a file called monmodule.py.
Step12: The second file
Step13: Note
Step14: To remove the module from memory, remove it from sys.modules with the statement del sys.modules['monmodule']. Python will then consider the module monmodule.py as new and will import it again.
Step16: The documentation for regular expressions is here
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
mat = [[1.0, 0.0],[0.0,1.0] ] # matrice de type liste de listes
with open ("mat.txt", "w") as f : # création d'un fichier en mode écriture
for i in range (0,len (mat)) : #
for j in range (0, len (mat [i])) : #
s = str (mat [i][j]) # conversion en chaîne de caractères
f.write (s + "\t") #
f.write ("\n") #
# on vérifie que le fichier existe :
import os
print([ _ for _ in os.listdir(".") if "mat" in _ ] )
# la ligne précédente utilise le symbole _ : c'est une variable
# le caractère _ est une lettre comme une autre
# on pourrait écrire :
# print([ fichier for fichier in os.listdir(".") if "mat" in fichier ] )
# on utilise cette convention pour dire que cette variable n'a pas vocation à rester
mat = [[1.0, 0.0],[0.0,1.0] ] # matrice de type liste de listes
with open ("mat.txt", "w") as f : # création d'un fichier
s = '\n'.join ( '\t'.join( str(x) for x in row ) for row in mat )
f.write ( s )
# on vérifie que le fichier existe :
print([ _ for _ in os.listdir(".") if "mat" in _ ] )
import pyensae
%load_ext pyensae
%head mat.txt
with open ("mat.txt", "r") as f : # ouverture d'un fichier
mat = [ row.strip(' \n').split('\t') for row in f.readlines() ]
print(mat)
with open ("mat.txt", "r") as f : # ouverture d'un fichier
mat = [ [ float(x) for x in row.strip(' \n').split('\t') ] for row in f.readlines() ]
print(mat)
import os
for f in os.listdir('.'):
print (f)
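# Quick sketch of a few os.path helpers mentioned above.
import os.path
print(os.path.join("dossier", "mat.txt")) # build a path in a portable way
print(os.path.exists("mat.txt")) # does the file exist?
print(os.path.getsize("mat.txt")) # size in bytes
print(os.path.splitext("mat.txt")) # split the name and the extension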
with open("exemple_fichier.txt", "w") as f:
f.write("something")
f = open("exemple_fichier.txt", "w")
f.write("something")
f.close()
import math
print (math.cos(1))
from math import cos
print (cos(1))
from math import * # cette syntaxe est déconseillée car il est possible qu'une fonction
print (cos(1)) # porte le même nom qu'une des vôtres
# fichier monmodule.py
import math
def fonction_cos_sequence(seq) :
return [ math.cos(x) for x in seq ]
if __name__ == "__main__" :
print ("ce message n'apparaît que si ce programme est le point d'entrée")
code = """
# -*- coding: utf-8 -*-
import math
def fonction_cos_sequence(seq) :
    return [ math.cos(x) for x in seq ]
if __name__ == "__main__" :
    print ("ce message n'apparaît que si ce programme est le point d'entrée")
"""
with open("monmodule.py", "w", encoding="utf8") as f :
f.write(code)
import monmodule
print ( monmodule.fonction_cos_sequence ( [ 1, 2, 3 ] ) )
import sys
list(sorted(sys.modules))[:10]
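# Sketch of the reload mechanism described above: removing the module from
# sys.modules (or calling importlib.reload) forces the next import to re-read the file.
import sys
if 'monmodule' in sys.modules:
    del sys.modules['monmodule']
import monmodule # imported again from scratch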
import pyensae.datasource
discours = pyensae.datasource.download_data('voeux.zip', website = 'xd')
import re # les expressions régulières sont accessibles via le module re
expression = re.compile("[0-9]{2}/[0-9]{2}/[0-9]{4}")
texte = "Je suis né le 28/12/1903 et je suis mort le 08/02/1957. Ma seconde femme est morte le 10/11/63."
cherche = expression.findall(texte)
print(cherche)
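# Sketch: capturing the day, month and year separately with groups; findall then
# returns tuples, and {2,4} also matches the two-digit year "63" above.
expression_groupes = re.compile(r"([0-9]{2})/([0-9]{2})/([0-9]{2,4})")
print(expression_groupes.findall(texte))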
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The above image is from
Step2: The stack of output feature maps is the input to the next layer.
Step3: Gradient-Based Learning Applied to Document Recognition, Proceedings of the IEEE, 86(11)
Step4: In CNTK
Step5: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
Step6: Now that we imported the needed modules, let's implement our first CNN, as shown in Figure 5 above.
Step7: To train the above model we need two things
Step8: Now let us write the training and validation loop.
Step9: Although this model is very simple, it still has too much code; we can do better. Here is the same model in a more terse format
Step10: Now that we have a trained model, let us classify the following image of a truck. We use PIL to read the image.
Step11: During training we have subtracted the mean from the input images. Here we take an approximate value of the mean and subtract it from the image.
Step12: Adding a dropout layer, with a drop rate of 0.25, before the last dense layer
Step13: Add batch normalization after each convolution and before the last dense layer
Step14: Let's implement a VGG-inspired network using the layers API; here is the architecture
Step15: Residual Network (ResNet)
Step16: The idea of the above block is two-fold
Step17: Let's write the full model
|
<ASSISTANT_TASK:>
Python Code:
# Figure 1
Image(url="https://cntk.ai/jup/201/cifar-10.png", width=500, height=500)
# Figure 2
Image(url="https://cntk.ai/jup/201/Conv2D.png")
# Figure 3
Image(url="https://cntk.ai/jup/201/Conv2DFeatures.png")
# Figure 4
Image(url="https://cntk.ai/jup/201/MaxPooling.png", width=400, height=400)
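# Tiny NumPy sketch (not part of the original tutorial) of what a single 2x2 filter
# and a 2x2 max pooling do on a small feature map.
import numpy as np
fmap = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1., 0.], [0., -1.]])
conv = np.zeros((3, 3))
for i in range(3): # slide the kernel over every 2x2 patch
    for j in range(3):
        conv[i, j] = np.sum(fmap[i:i+2, j:j+2] * kernel)
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3)) # 2x2 max pooling
print(conv)
print(pooled)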
from __future__ import print_function # Use a function definition from future version (say 3.x from 2.7 interpreter)
import matplotlib.pyplot as plt
import math
import numpy as np
import os
import PIL
import sys
try:
from urllib.request import urlopen
except ImportError:
from urllib import urlopen
import cntk as C
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
# Figure 5
Image(url="https://cntk.ai/jup/201/CNN.png")
def create_basic_model(input, out_dims):
with C.layers.default_options(init=C.glorot_uniform(), activation=C.relu):
net = C.layers.Convolution((5,5), 32, pad=True)(input)
net = C.layers.MaxPooling((3,3), strides=(2,2))(net)
net = C.layers.Convolution((5,5), 32, pad=True)(net)
net = C.layers.MaxPooling((3,3), strides=(2,2))(net)
net = C.layers.Convolution((5,5), 64, pad=True)(net)
net = C.layers.MaxPooling((3,3), strides=(2,2))(net)
net = C.layers.Dense(64)(net)
net = C.layers.Dense(out_dims, activation=None)(net)
return net
# Determine the data path for testing
# Check for an environment variable defined in CNTK's test infrastructure
#envvar = 'CNTK_EXTERNAL_TESTDATA_SOURCE_DIRECTORY'
#def is_test(): return envvar in os.environ
#if is_test():
# data_path = os.path.join(os.environ[envvar],'Image','CIFAR','v0','tutorial201')
# data_path = os.path.normpath(data_path)
#else:
# data_path = os.path.join('data', 'CIFAR-10')
data_path = 'C:\\Users\\hojohnl\\Source\\Repos\\CNTK\\Examples\\Image\\DataSets\\CIFAR-10'
# model dimensions
image_height = 32
image_width = 32
num_channels = 3
num_classes = 10
import cntk.io.transforms as xforms
#
# Define the reader for both training and evaluation action.
#
def create_reader(map_file, mean_file, train):
print("Reading map file:", map_file)
print("Reading mean file:", mean_file)
if not os.path.exists(map_file) or not os.path.exists(mean_file):
raise RuntimeError("This tutorials depends 201A tutorials, please run 201A first.")
# transformation pipeline for the features has jitter/crop only when training
transforms = []
# train uses data augmentation (translation only)
if train:
transforms += [
xforms.crop(crop_type='randomside', side_ratio=0.8)
]
transforms += [
xforms.scale(width=image_width, height=image_height, channels=num_channels, interpolations='linear'),
xforms.mean(mean_file)
]
# deserializer
return C.io.MinibatchSource(C.io.ImageDeserializer(map_file, C.io.StreamDefs(
features = C.io.StreamDef(field='image', transforms=transforms), # first column in map file is referred to as 'image'
labels = C.io.StreamDef(field='label', shape=num_classes) # and second as 'label'
)))
# Create the train and test readers
print(data_path)
reader_train = create_reader(os.path.join(data_path, 'train_map.txt'),
os.path.join(data_path, 'CIFAR-10_mean.xml'), True)
reader_test = create_reader(os.path.join(data_path, 'test_map.txt'),
os.path.join(data_path, 'CIFAR-10_mean.xml'), False)
#
# Train and evaluate the network.
#
def train_and_evaluate(reader_train, reader_test, max_epochs, model_func):
# Input variables denoting the features and label data
input_var = C.input_variable((num_channels, image_height, image_width))
label_var = C.input_variable((num_classes))
# Normalize the input
feature_scale = 1.0 / 256.0
input_var_norm = C.element_times(feature_scale, input_var)
# apply model to input
z = model_func(input_var_norm, out_dims=10)
#
# Training action
#
# loss and metric
ce = C.cross_entropy_with_softmax(z, label_var)
pe = C.classification_error(z, label_var)
# training config
epoch_size = 50000
minibatch_size = 64
# Set training parameters
lr_per_minibatch = C.learning_rate_schedule([0.01]*10 + [0.003]*10 + [0.001],
C.UnitType.minibatch, epoch_size)
momentum_time_constant = C.momentum_as_time_constant_schedule(-minibatch_size/np.log(0.9))
l2_reg_weight = 0.001
# trainer object
learner = C.momentum_sgd(z.parameters,
lr = lr_per_minibatch,
momentum = momentum_time_constant,
l2_regularization_weight=l2_reg_weight)
progress_printer = C.logging.ProgressPrinter(tag='Training', num_epochs=max_epochs)
trainer = C.Trainer(z, (ce, pe), [learner], [progress_printer])
# define mapping from reader streams to network inputs
input_map = {
input_var: reader_train.streams.features,
label_var: reader_train.streams.labels
}
C.logging.log_number_of_parameters(z) ; print()
# perform model training
batch_index = 0
plot_data = {'batchindex':[], 'loss':[], 'error':[]}
for epoch in range(max_epochs): # loop over epochs
sample_count = 0
while sample_count < epoch_size: # loop over minibatches in the epoch
data = reader_train.next_minibatch(min(minibatch_size, epoch_size - sample_count),
input_map=input_map) # fetch minibatch.
trainer.train_minibatch(data) # update model with it
sample_count += data[label_var].num_samples # count samples processed so far
# For visualization...
plot_data['batchindex'].append(batch_index)
plot_data['loss'].append(trainer.previous_minibatch_loss_average)
plot_data['error'].append(trainer.previous_minibatch_evaluation_average)
batch_index += 1
trainer.summarize_training_progress()
#
# Evaluation action
#
epoch_size = 10000
minibatch_size = 16
# process minibatches and evaluate the model
metric_numer = 0
metric_denom = 0
sample_count = 0
minibatch_index = 0
while sample_count < epoch_size:
current_minibatch = min(minibatch_size, epoch_size - sample_count)
# Fetch next test min batch.
data = reader_test.next_minibatch(current_minibatch, input_map=input_map)
# minibatch data to be trained with
metric_numer += trainer.test_minibatch(data) * current_minibatch
metric_denom += current_minibatch
# Keep track of the number of samples processed so far.
sample_count += data[label_var].num_samples
minibatch_index += 1
print("")
print("Final Results: Minibatch[1-{}]: errs = {:0.1f}% * {}".format(minibatch_index+1, (metric_numer*100.0)/metric_denom, metric_denom))
print("")
# Visualize training result:
window_width = 32
loss_cumsum = np.cumsum(np.insert(plot_data['loss'], 0, 0))
error_cumsum = np.cumsum(np.insert(plot_data['error'], 0, 0))
# Moving average.
plot_data['batchindex'] = np.insert(plot_data['batchindex'], 0, 0)[window_width:]
plot_data['avg_loss'] = (loss_cumsum[window_width:] - loss_cumsum[:-window_width]) / window_width
plot_data['avg_error'] = (error_cumsum[window_width:] - error_cumsum[:-window_width]) / window_width
plt.figure(1)
plt.subplot(211)
plt.plot(plot_data["batchindex"], plot_data["avg_loss"], 'b--')
plt.xlabel('Minibatch number')
plt.ylabel('Loss')
plt.title('Minibatch run vs. Training loss ')
plt.show()
plt.subplot(212)
plt.plot(plot_data["batchindex"], plot_data["avg_error"], 'r--')
plt.xlabel('Minibatch number')
plt.ylabel('Label Prediction Error')
plt.title('Minibatch run vs. Label Prediction Error ')
plt.show()
return C.softmax(z)
pred = train_and_evaluate(reader_train,
reader_test,
max_epochs=5,
model_func=create_basic_model)
def create_basic_model_terse(input, out_dims):
with C.layers.default_options(init=C.glorot_uniform(), activation=C.relu):
model = C.layers.Sequential([
C.layers.For(range(3), lambda i: [
C.layers.Convolution((5,5), [32,32,64][i], pad=True),
C.layers.MaxPooling((3,3), strides=(2,2))
]),
C.layers.Dense(64),
C.layers.Dense(out_dims, activation=None)
])
return model(input)
pred_basic_model = train_and_evaluate(reader_train,
reader_test,
max_epochs=10,
model_func=create_basic_model_terse)
# Figure 6
Image(url="https://cntk.ai/jup/201/00014.png", width=64, height=64)
import PIL
# Download a sample image
# (this is 00014.png from test dataset)
# Any image of size 32,32 can be evaluated
url = "https://cntk.ai/jup/201/00014.png"
myimg = np.array(PIL.Image.open(urlopen(url)), dtype=np.float32)
def eval(pred_op, image_data):
label_lookup = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
image_mean = 133.0
image_data -= image_mean
image_data = np.ascontiguousarray(np.transpose(image_data, (2, 0, 1)))
result = np.squeeze(pred_op.eval({pred_op.arguments[0]:[image_data]}))
# Return top 3 results:
top_count = 3
result_indices = (-np.array(result)).argsort()[:top_count]
print("Top 3 predictions:")
for i in range(top_count):
print("\tLabel: {:10s}, confidence: {:.2f}%".format(label_lookup[result_indices[i]], result[result_indices[i]] * 100))
# Run the evaluation on the downloaded image
eval(pred_basic_model, myimg)
from cntk.logging.graph import get_node_outputs
node_outputs = get_node_outputs(pred_basic_model)
node_string_output = str(node_outputs)
print(node_string_output[0:1024])
len(node_outputs)
def create_basic_model_with_dropout(input, out_dims):
with C.layers.default_options(activation=C.relu, init=C.glorot_uniform()):
model = C.layers.Sequential([
C.layers.For(range(3), lambda i: [
C.layers.Convolution((5,5), [32,32,64][i], pad=True),
C.layers.MaxPooling((3,3), strides=(2,2))
]),
C.layers.Dense(64),
C.layers.Dropout(0.25),
C.layers.Dense(out_dims, activation=None)
])
return model(input)
pred_basic_model_dropout = train_and_evaluate(reader_train,
reader_test,
max_epochs=5,
model_func=create_basic_model_with_dropout)
def create_basic_model_with_batch_normalization(input, out_dims):
with C.layers.default_options(activation=C.relu, init=C.glorot_uniform()):
model = C.layers.Sequential([
C.layers.For(range(3), lambda i: [
C.layers.Convolution((5,5), [32,32,64][i], pad=True),
C.layers.BatchNormalization(map_rank=1),
C.layers.MaxPooling((3,3), strides=(2,2))
]),
C.layers.Dense(64),
C.layers.BatchNormalization(map_rank=1),
C.layers.Dense(out_dims, activation=None)
])
return model(input)
pred_basic_model_bn = train_and_evaluate(reader_train,
reader_test,
max_epochs=5,
model_func=create_basic_model_with_batch_normalization)
eval(pred_basic_model_bn, myimg)
def create_vgg9_model(input, out_dims):
with C.layers.default_options(activation=C.relu, init=C.glorot_uniform()):
model = C.layers.Sequential([
C.layers.For(range(3), lambda i: [
C.layers.Convolution((3,3), [64,96,128][i], pad=True),
C.layers.Convolution((3,3), [64,96,128][i], pad=True),
C.layers.MaxPooling((3,3), strides=(2,2))
]),
C.layers.For(range(2), lambda : [
C.layers.Dense(1024)
]),
C.layers.Dense(out_dims, activation=None)
])
return model(input)
pred_vgg = train_and_evaluate(reader_train,
reader_test,
max_epochs=5,
model_func=create_vgg9_model)
eval(pred_vgg, myimg)
# Figure 7
Image(url="https://cntk.ai/jup/201/ResNetBlock2.png")
def convolution_bn(input, filter_size, num_filters, strides=(1,1), init=C.he_normal(), activation=C.relu):
if activation is None:
activation = lambda x: x
r = C.layers.Convolution(filter_size,
num_filters,
strides=strides,
init=init,
activation=None,
pad=True, bias=False)(input)
r = C.layers.BatchNormalization(map_rank=1)(r)
r = activation(r)
return r
def resnet_basic(input, num_filters):
c1 = convolution_bn(input, (3,3), num_filters)
c2 = convolution_bn(c1, (3,3), num_filters, activation=None)
p = c2 + input
return C.relu(p)
def resnet_basic_inc(input, num_filters):
c1 = convolution_bn(input, (3,3), num_filters, strides=(2,2))
c2 = convolution_bn(c1, (3,3), num_filters, activation=None)
s = convolution_bn(input, (1,1), num_filters, strides=(2,2), activation=None)
p = c2 + s
return C.relu(p)
def resnet_basic_stack(input, num_filters, num_stack):
assert (num_stack > 0)
r = input
for _ in range(num_stack):
r = resnet_basic(r, num_filters)
return r
def create_resnet_model(input, out_dims):
conv = convolution_bn(input, (3,3), 16)
r1_1 = resnet_basic_stack(conv, 16, 3)
r2_1 = resnet_basic_inc(r1_1, 32)
r2_2 = resnet_basic_stack(r2_1, 32, 2)
r3_1 = resnet_basic_inc(r2_2, 64)
r3_2 = resnet_basic_stack(r3_1, 64, 2)
# Global average pooling
pool = C.layers.AveragePooling(filter_shape=(8,8), strides=(1,1))(r3_2)
net = C.layers.Dense(out_dims, init=C.he_normal(), activation=None)(pool)
return net
pred_resnet = train_and_evaluate(reader_train, reader_test, max_epochs=10, model_func=create_resnet_model)
eval(pred_resnet, myimg)
pred_resnet.save('cifar10-resnet.model')
m = C.load_model('cifar10-resnet.model')
eval(m, myimg)
from __future__ import print_function
import numpy as np
import cntk as C
from cntk.learners import sgd, learning_rate_schedule, UnitType
from cntk.logging import ProgressPrinter
from cntk.layers import Dense, Sequential
def generate_random_data(sample_size, feature_dim, num_classes):
# Create synthetic data using NumPy.
Y = np.random.randint(size=(sample_size, 1), low=0, high=num_classes)
# Make sure that the data is separable
X = (np.random.randn(sample_size, feature_dim) + 3) * (Y + 1)
X = X.astype(np.float32)
# converting class 0 into the vector "1 0 0",
# class 1 into vector "0 1 0", ...
class_ind = [Y == class_number for class_number in range(num_classes)]
Y = np.asarray(np.hstack(class_ind), dtype=np.float32)
return X, Y
def ffnet():
inputs = 2
outputs = 2
layers = 2
hidden_dimension = 50
# input variables denoting the features and label data
features = C.input_variable((inputs), np.float32)
label = C.input_variable((outputs), np.float32)
# Instantiate the feedforward classification model
my_model = Sequential ([
Dense(hidden_dimension, activation=C.sigmoid),
Dense(outputs)])
z = my_model(features)
ce = C.cross_entropy_with_softmax(z, label)
pe = C.classification_error(z, label)
# Instantiate the trainer object to drive the model training
lr_per_minibatch = learning_rate_schedule(0.125, UnitType.minibatch)
progress_printer = ProgressPrinter(0)
trainer = C.Trainer(z, (ce, pe), [sgd(z.parameters, lr=lr_per_minibatch)], [progress_printer])
# Get minibatches of training data and perform model training
minibatch_size = 25
num_minibatches_to_train = 1024
aggregate_loss = 0.0
for i in range(num_minibatches_to_train):
train_features, labels = generate_random_data(minibatch_size, inputs, outputs)
# Specify the mapping of input variables in the model to actual minibatch data to be trained with
trainer.train_minibatch({features : train_features, label : labels})
sample_count = trainer.previous_minibatch_sample_count
aggregate_loss += trainer.previous_minibatch_loss_average * sample_count
last_avg_error = aggregate_loss / trainer.total_number_of_samples_seen
test_features, test_labels = generate_random_data(minibatch_size, inputs, outputs)
avg_error = trainer.test_minibatch({features : test_features, label : test_labels})
print(' error rate on an unseen minibatch: {}'.format(avg_error))
return last_avg_error, avg_error
np.random.seed(98052)
ffnet()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Classification
Step2: http
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import tree
X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
clf.predict([[2., 2.]])
clf.predict_proba([[2., 2.]])
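# Optional sketch: inspecting the fitted tree as text (export_text is available in
# recent scikit-learn versions).
from sklearn.tree import export_text
print(export_text(clf, feature_names=['x0', 'x1']))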
X = [[0, 0], [2, 2]]
y = [0.5, 2.5]
clf = tree.DecisionTreeRegressor()
clf = clf.fit(X, y)
clf.predict([[1, 1]])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear modeling, subsampling the negative set ~20 times
Step2: Charge alone can predict TADs with AUC = 0.88; amino acid composition reaches AUC = 0.93 (which includes the charges as well as the information from hydrophobic and aromatic residues).
Step3: Flip the coefficients and test performance to assign importance of dipeptide 'DW'. How does that affect performance?
Step4: What if we flip the whole heatmap on the diagonal line (lower left to upper right)?
Step5: And shuffling?
Step6: Try flipping weights of every other dipeptides, one at a time and compare?
Step7: Try assigning a zero to each coefficient.
Step8: Conclutions from dipeptide segment
Step9: xgboost with single amino acid frequencies
|
<ASSISTANT_TASK:>
Python Code:
# create one numpy_map array for positives and 12 for negatives
idx = positives_train
p = get_aa_frequencies(positives[idx,0])
p_train, p_filename = store_data_numpy(np.hstack(p).T, float)
# set the positive validation array
idx = positives_validation
p_valid = get_aa_frequencies(positives[idx,0])
p_valid = np.hstack(p_valid).T
# negatives. SQL indexes start with 1 and not 0
N = divisors[-1]
idxs = np.array(negatives_train)
idxs = np.vstack(np.split(idxs, N))
n_filenames = np.empty(N, dtype='O')
n_train_shape = tuple(np.insert(idxs.shape, 2, 20))
n_train = np.zeros(shape=n_train_shape, dtype=np.float)
for i in range(N):
n = get_aa_frequencies(negatives[idxs[i],0])
n_train[i], n_filenames[i] = store_data_numpy(np.hstack(n).T, float)
# set the negative validation array
idx = negatives_validation
n_valid = get_aa_frequencies(negatives[idx,0])
n_valid = np.hstack(n_valid).T
# set a proper validation set with negatives and positives
X_valid = np.vstack([n_valid, p_valid])
y_valid = np.hstack([np.zeros(n_valid.shape[0]), np.ones(p_valid.shape[0])])
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score, make_scorer
aminoacid_frequencies = []
for i in range(20):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
model = LogisticRegressionCV(Cs=np.linspace(1e-4, 1e4, 50), scoring=make_scorer(roc_auc_score)).fit(X,y)
performance = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
aminoacid_frequencies.append(performance)
print('amino acid frequencies\nauc = {:.2f} +- {:.5f}'.format(np.mean(aminoacid_frequencies), np.std(aminoacid_frequencies)))
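# Optional sketch: ROC curve of the last fitted model on the held-out validation set.
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(y_valid, model.predict_proba(X_valid)[:, 1])
plt.plot(fpr, tpr)
plt.xlabel('false positive rate')
plt.ylabel('true positive rate');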
cols = ['b']*3 + ['r']*2 + ['g']*12 + ['y']*3
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.title('coefficients', fontsize=16)
plt.bar(aa, model.coef_[0], color=cols)
plt.grid(axis='y')
# frequencies
n = np.where(y==0)
p = np.where(y==1)
freqs = np.mean(X[p], axis=0) / np.mean(X[n], axis=0)
freqs = ((freqs - np.min(freqs)) / (np.max(freqs) - np.min(freqs))) - 0.5
plt.subplot(1,2,2)
plt.title('frequencies', fontsize=16)
plt.bar(aa, freqs, color=cols)
plt.grid();
# test all other physical properties
other_props = []
for prop in physical_props.columns:
#X_valid_prop = np.dot(X_valid, physical_props[prop].values).reshape(-1,1)
tmp = []
for i in range(20):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
# there is a problem with the index of physical_properties
Pidx = [i+' ' for i in aa]
X = np.dot(X, physical_props.loc[Pidx, prop].values)
performance = 1 - roc_auc_score(y, X)
tmp.append(performance)
other_props.append(tmp)
# print results
pd.DataFrame(np.array(other_props).mean(axis=1), index = physical_props.columns, columns=['roc_auc'])
# remove the temporary numpy files of aminoacid frequencies
import subprocess
for i in n_filenames + p_filename:
subprocess.call(["rm",i])
# create one numpy_map array for positives and 12 for negatives
idx = positives_train
p = get_dipeptide_frequencies(positives[idx,0])
p_train, p_filename = store_data_numpy(np.vstack(p), float)
# set the positive validation array
idx = positives_validation
p_valid = get_dipeptide_frequencies(positives[idx,0])
p_valid = np.vstack(p_valid)
# negatives. SQL indexes start with 1 and not 0
N = divisors[-1]
idxs = np.array(negatives_train)
idxs = np.vstack(np.split(idxs, N))
n_filenames = np.empty(N, dtype='O')
n_train_shape = tuple(np.insert(idxs.shape, 2, 400))
n_train = np.zeros(shape=n_train_shape, dtype=np.float)
for i in range(N):
n = get_dipeptide_frequencies(negatives[idxs[i],0])
n_train[i], n_filenames[i] = store_data_numpy(np.vstack(n), float)
# set the negative validation array
idx = negatives_validation
n_valid = get_dipeptide_frequencies(negatives[idx,0])
n_valid = np.vstack(n_valid)
# set a proper validation set with negatives and positives
X_valid = np.vstack([n_valid, p_valid])
y_valid = np.hstack([np.zeros(n_valid.shape[0]), np.ones(p_valid.shape[0])])
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score, make_scorer
dipeptide_frequencies, coefficients = [], []
for i in range(20):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
model = LogisticRegressionCV(Cs=np.linspace(1e-4, 1e4, 50), scoring=make_scorer(roc_auc_score)).fit(X,y)
performance = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
dipeptide_frequencies.append(performance)
coefficients.append(model.coef_[0])
# Summarize dipeptides linear model
x = np.mean(dipeptide_frequencies)
s = np.std(dipeptide_frequencies)
print('{:.4f} +- {:.4f}'.format(x,s))
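# Optional sketch: average the coefficients over the 20 subsampled fits for a more
# stable view than the weights of a single model.
mean_coef = np.mean(coefficients, axis=0)
plt.pcolor(mean_coef.reshape(20, 20), cmap='hot_r')
plt.colorbar(shrink=0.5);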
# I In case this cell is run more than once, set coefficient to original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# plot coefficients
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,5))
# correct orientation
im = ax1.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
# flip coefficients
reverse = lambda dipept: ''.join([i for i in reversed(dipept)]) # reverse dipeptide fx
fwd = ['DW','EW','DL','EL','DF','EF','DI','FI']
rev = [reverse(i) for i in fwd]
fwd = [np.where(dipeptides==i)[0][0] for i in fwd]
rev = [np.where(dipeptides==i)[0][0] for i in rev]
coef_fwd = [model.coef_[0][i] for i in fwd]
coef_rev = [model.coef_[0][i] for i in rev]
# perform the flipping
for i,j,k,l in zip(fwd, coef_rev, rev, coef_fwd):
model.coef_[0][i] = j
model.coef_[0][k] = l
# plot the flipped orientation
im = ax2.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax2.set_xticks(np.arange(20)+0.5)
ax2.set_yticks(np.arange(20)+0.5)
ax2.set_xticklabels(aa)
ax2.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax2);
# get performance using the flipped orientation
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
print('dipeptides model auc = {:.3f}\nflipped model auc = {:.3f}\ndecrease in performance= {:.1f}%'.format(
fwd_weights, rev_weights, (1-(rev_weights/fwd_weights))*200))
# I In case this cell is run more than once, set coefficient to original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# plot coefficients
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,5))
# correct orientation
im = ax1.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
# flip coefficients
reverse = lambda dipept: ''.join([i for i in reversed(dipept)]) # reverse dipeptide fx
fwd = dipeptides
rev = [reverse(i) for i in fwd]
fwd = [np.where(dipeptides==i)[0][0] for i in fwd]
rev = [np.where(dipeptides==i)[0][0] for i in rev]
coef_fwd = [model.coef_[0][i] for i in fwd]
coef_rev = [model.coef_[0][i] for i in rev]
# perform the flipping
for i,j,k,l in zip(fwd, coef_rev, rev, coef_fwd):
model.coef_[0][i] = j
model.coef_[0][k] = l
# plot the flipped orientation
im = ax2.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax2.set_xticks(np.arange(20)+0.5)
ax2.set_yticks(np.arange(20)+0.5)
ax2.set_xticklabels(aa)
ax2.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax2);
# get performance using the flipped orientation
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
print('dipeptides model auc = {:.3f}\nflipped model auc = {:.3f}\ndecrease in performance= {:.1f}%'.format(
fwd_weights, rev_weights, (1-(rev_weights/fwd_weights))*200))
# I In case this cell is run more than once, set coefficient to original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# plot coefficients
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,5))
# correct orientation
im = ax1.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
# shuffle coefficients
np.random.shuffle(model.coef_[0])
# plot the flipped orientation
im = ax2.pcolor(model.coef_[0].reshape(20,20), cmap='hot_r')
ax2.set_xticks(np.arange(20)+0.5)
ax2.set_yticks(np.arange(20)+0.5)
ax2.set_xticklabels(aa)
ax2.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax2);
# get performance using the flipped orientation
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# get performance using the flipped orientation
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
print('dipeptides model auc = {:.3f}\nflipped model auc = {:.3f}\ndecrease in performance= {:.1f}%'.format(
fwd_weights, rev_weights, (1-(rev_weights/fwd_weights))*200))
# and store the impact in performance in this list
flipping_performances = []
for idx in dipeptides:
# I In case this cell is run more than once, set coefficient to original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# flip coefficients
reverse = lambda dipept: ''.join([i for i in reversed(dipept)]) # reverse dipeptide fx
fwd = [idx]
rev = [reverse(i) for i in fwd]
fwd = [np.where(dipeptides==i)[0][0] for i in fwd]
rev = [np.where(dipeptides==i)[0][0] for i in rev]
coef_fwd = [model.coef_[0][i] for i in fwd]
coef_rev = [model.coef_[0][i] for i in rev]
# perform the flipping
for i,j,k,l in zip(fwd, coef_rev, rev, coef_fwd):
model.coef_[0][i] = j
model.coef_[0][k] = l
# get performance using the flipped orientation
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
flipping_performances.append(rev_weights / fwd_weights)
# convert flipping performances into performance losses
flipping_performances = (1-np.array(flipping_performances))*200
# plot coefficients
f, ax1 = plt.subplots(1, figsize=(6,5))
# correct orientation
im = ax1.pcolor(np.array(flipping_performances).reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
ax1.set_title('Impact of fliping the dipeptide on\nperformance of linear model', fontsize=16);
# and store the impact in performance in this list
nulling_performances = []
for idx in range(len(dipeptides)):
# I In case this cell is run more than once, set coefficient to original values
model.coef_[0] = coefficients[0]
# get performance with correct weights.
fwd_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
# zeroing the coefficient
model.coef_[0][idx] = 0
# get performance using the flipped orientation
rev_weights = 1 - roc_auc_score(y_valid, model.predict_proba(X_valid)[:,0])
nulling_performances.append(rev_weights / fwd_weights)
# convert data into performece losses
nulling_performances = (1 - np.array(nulling_performances))*200
# plot coefficients
f, ax1 = plt.subplots(1, figsize=(6,5))
# correct orientation
im = ax1.pcolor(np.array(nulling_performances).reshape(20,20), cmap='hot_r')
ax1.set_xticks(np.arange(20)+0.5)
ax1.set_yticks(np.arange(20)+0.5)
ax1.set_xticklabels(aa)
ax1.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax1)
ax1.set_title('Impact of zeroing the dipeptide on\nperformance of linear model', fontsize=16);
import warnings
warnings.filterwarnings('ignore')
from sklearn.metrics import roc_auc_score, make_scorer
from xgboost import XGBClassifier
# booster parameters
param = {'max_depth': 3,
'eta': 1,
'silent': 1,
'learning_rate': 0.1,
'objective': 'binary:logistic',
'eval_metric': 'auc',
'nthread':4,
'subsample':0.8,
'booster':'gbtree',
'n_estimators': 100}
performances, models = [], []
for i in range(10):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
# Monitoring training performance
eval_set = [(X,y), (X_valid, y_valid)] # should change this to X_set
eval_metric = ["auc"] #, "logloss", "error"]
early_stopping_rounds=10
# fit model no training data
model = XGBClassifier(**param)
model.fit(X,y, eval_set=eval_set,
eval_metric=eval_metric,
early_stopping_rounds=early_stopping_rounds,
verbose=False)
y_pred = model.predict(X_valid)
# Am I overfitting? How was performance in the training data?
y_pred2 = model.predict(X)
# evaluate predictions
on_training = roc_auc_score(y, y_pred2)
on_validation = roc_auc_score(y_valid, y_pred)
performances.append([on_training, on_validation])
models.append(model)
l = len(models[0].evals_result()['validation_0']['auc'])
plt.plot(np.arange(l), models[0].evals_result()['validation_0']['auc'], label='training')
plt.plot(np.arange(l), models[0].evals_result()['validation_1']['auc'], label='validation')
plt.legend()
f, ax = plt.subplots(1, figsize=(6,5))
im = plt.pcolor(models[9].feature_importances_.reshape(20,20), cmap='hot_r')
ax.set_xticks(np.arange(20)+0.5)
ax.set_yticks(np.arange(20)+0.5)
ax.set_xticklabels(aa)
ax.set_yticklabels(aa)
plt.colorbar(im, shrink=0.5, ax=ax)
import warnings
warnings.filterwarnings('ignore')
from sklearn.metrics import roc_auc_score, make_scorer
from xgboost import XGBClassifier
# booster parameters
param = {'max_depth': 3,
'eta': 1,
'silent': 1,
'learning_rate': 0.1,
'objective': 'binary:logistic',
'eval_metric': 'auc',
'nthread':4,
'subsample':0.8,
'booster':'gbtree',
'n_estimators': 100}
performances, models = [], []
for i in range(10):
# subsample the negative set and join with positive set
negative_sample = np.concatenate(subsample_negatives(n_train, p_train.shape[0]))
positive_sample = p_train
X = np.vstack([negative_sample, positive_sample])
y = np.hstack([np.zeros(negative_sample.shape[0]), np.ones(positive_sample.shape[0])])
# Monitoring training performance
eval_set = [(X,y), (X_valid, y_valid)] # should change this to X_set
eval_metric = ["auc"] #, "logloss", "error"]
early_stopping_rounds=10
# fit model no training data
model = XGBClassifier(**param)
model.fit(X,y, eval_set=eval_set,
eval_metric=eval_metric,
early_stopping_rounds=early_stopping_rounds,
verbose=False)
y_pred = model.predict(X_valid)
# Am I overfitting? How was performance in the training data?
y_pred2 = model.predict(X)
# evaluate predictions
on_training = roc_auc_score(y, y_pred2)
on_validation = roc_auc_score(y_valid, y_pred)
performances.append([on_training, on_validation])
models.append(model)
import matplotlib.gridspec as gridspec
plt.figure(figsize=(15,5))
gs = gridspec.GridSpec(1,2,
width_ratios = [1,2])
plt.subplot(gs[0])
l = len(models[0].evals_result()['validation_0']['auc'])
plt.plot(np.arange(l), models[0].evals_result()['validation_0']['auc'], label='training')
plt.plot(np.arange(l), models[0].evals_result()['validation_1']['auc'], label='validation')
plt.legend()
plt.subplot(gs[1])
plt.bar(aa, models[9].feature_importances_)
<END_TASK>
|