| Unnamed: 0 (int64) | cleaned_code (string) | cleaned_prompt (string) |
|---|---|---|
13,300
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr4', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
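# Example with hypothetical values (illustrative only, not the real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Roe", "john.roe@example.org")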
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
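# Example (illustrative only), picking one of the valid choices above: DOC.set_value("AGCM")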
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
13,301
|
<ASSISTANT_TASK:>
Python Code:
from workflow.engine import GenericWorkflowEngine
my_engine = GenericWorkflowEngine()
from functools import wraps
def print_data(obj, eng):
    """Print the data found in the token."""
    print(obj.data)
def add_data(number_to_add):
    """Add number_to_add to obj.data."""
    @wraps(add_data)
    def _add_data(obj, eng):
        obj.data += number_to_add
    return _add_data
my_workflow_definition = [
add_data(1),
print_data
]
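# The definition above runs two tasks on every token: add_data(1) increments
# obj.data by one, then print_data prints the updated value.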
class MyObject:
def __init__(self, data):
self._data = data
@property
def data(self):
return self._data
@data.setter
def data(self, value):
print(self)
self._data = value
def __repr__(self):
return "{0} : {1}".format(hex(id(self)), self._data)
my_object0 = MyObject(0)
my_object1 = MyObject(1)
my_engine.callbacks.replace(my_workflow_definition)
my_engine.process([my_object0, my_object1])
my_engine.process([my_object0, my_object1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Instantiate a workflow engine
Step4: Create tasks
Step5: Create a workflow definition
Step6: Define tokens
Step7: Run the engine
Step8: Reusing the engine
|
13,302
|
<ASSISTANT_TASK:>
Python Code:
import xray_vision
import xray_vision.mpl_plotting as mpl_plot
import skbeam.core.speckle as xsvs
import skbeam.core.roi as roi
import skbeam.core.correlation as corr
import skbeam.core.utils as utils
import numpy as np
import os, sys
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from matplotlib.colors import LogNorm
from mpl_toolkits.axes_grid1.axes_grid import ImageGrid
interactive_mode = False
if interactive_mode:
%matplotlib notebook
else:
%matplotlib inline
backend = mpl.get_backend()
%run download.py
data_dir = "Duke_data/"
duke_rdata = np.load(data_dir+"duke_img_1_5000.npy")
duke_dark = np.load(data_dir+"duke_dark.npy")
duke_data = []
for i in range(duke_rdata.shape[0]):
duke_data.append(duke_rdata[i] - duke_dark)
duke_ndata=np.asarray(duke_data)
# load the mask(s) and mask the data
mask1 = np.load(data_dir+"new_mask4.npy")
mask2 = np.load(data_dir+"Luxi_duke_mask.npy")
N_mask = ~(mask1 + mask2)
mask_data = N_mask*duke_ndata
# get the average image
avg_img = np.average(duke_ndata, axis=0)
# if matplotlib version 1.5 or later
if float('.'.join(mpl.__version__.split('.')[:2])) >= 1.5:
cmap = 'viridis'
else:
cmap = 'CMRmap'
# plot the average image data after masking
plt.figure()
plt.imshow(N_mask*avg_img, vmax=1e0, cmap=cmap)
plt.title("Averaged masked data for Duke Silica Gel ")
plt.colorbar()
plt.show()
inner_radius = 26 # radius of the first ring
width = 1 # width of each ring
spacing = 0 # no spacing between rings
num_rings = 4 # number of rings
center = (133, 143) # center of the speckle pattern
# find the edges of the required rings
edges = roi.ring_edges(inner_radius, width, spacing, num_rings)
edges
dpix = 0.055 # The physical size of the pixels
lambda_ = 1.5498 # wavelength of the X-rays
Ldet = 2200. # detector to sample distance
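# Convert the ring edge radii from pixels to physical units (edges * dpix), then
# to two-theta scattering angles, and finally to momentum transfer q (1/Angstrom).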
two_theta = utils.radius_to_twotheta(Ldet, edges*dpix)
q_val = utils.twotheta_to_q(two_theta, lambda_)
q_val
q_ring = np.mean(q_val, axis=1)
q_ring
rings = roi.rings(edges, center, avg_img.shape)
images_sets = (mask_data, )
ring_mask = rings*N_mask
# plot the figure
fig, axes = plt.subplots(figsize=(5, 5))
axes.set_title("Ring Mask")
im = mpl_plot.show_label_array(axes, ring_mask, cmap=cmap)
plt.show()
max_cts = roi.roi_max_counts(images_sets, ring_mask)
max_cts
spe_cts_all, std_dev = xsvs.xsvs(images_sets, ring_mask, timebin_num=2,
number_of_img=30, max_cts=max_cts)
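# spe_cts_all holds the speckle count probability distributions P(K) for each
# integration time and each ring ROI; std_dev holds the corresponding standard deviations.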
time_steps = utils.geometric_series(2, 30)
time_steps
mean_int_sets, index_list = roi.mean_intensity(mask_data, ring_mask)
plt.figure(figsize=(8, 8))
plt.title("Mean intensity of each ring")
for i in range(num_rings):
plt.plot(mean_int_sets[:,i], label="Ring "+str(i+1))
plt.legend()
plt.show()
mean_int_ring = np.mean(mean_int_sets, axis=0)
mean_int_ring
num_times = 6
num_rois=num_rings
norm_bin_edges, norm_bin_centers = xsvs.normalize_bin_edges(num_times,
num_rois, mean_int_ring, max_cts)
fig, axes = plt.subplots(figsize=(6, 6))
axes.set_xlabel("K/<K>")
axes.set_ylabel("P(K)")
for i in range(4):
art, = axes.plot(norm_bin_edges[i, 0][:-1], spe_cts_all[i, 0], '-o', label=str(time_steps[i])+" ms")
axes.set_xlim(0, 4)
axes.legend()
plt.title("1st q ring 0.0026859 (1/Angstroms)")
plt.show()
fig, axes = plt.subplots(figsize=(6, 6))
axes.set_xlabel("K/<K>")
axes.set_ylabel("P(K)")
for i in range(4):
art, = axes.plot(norm_bin_edges[i, 1][:-1], spe_cts_all[i, 1], '-o', label=str(time_steps[i])+" ms")
axes.legend()
axes.set_xlim(0, 4)
plt.title("2nd q ring 0.00278726 (1/Angstroms)")
plt.show()
time_steps_5 = utils.geometric_series(5, 50)
time_steps_5
p_K, std_dev_5 = xsvs.xsvs(images_sets, ring_mask, timebin_num=5,
number_of_img=50, max_cts=max_cts)
fig, axes = plt.subplots(figsize=(6, 6))
axes.set_xlabel("K/<K>")
axes.set_ylabel("P(K)")
for i in range(3):
art, = axes.plot(norm_bin_edges[i, 0][:-1], p_K[i, 0], '-o', label=str(time_steps_5[i])+" ms")
axes.set_xlim(0, 4)
axes.legend()
plt.title("1st q ring 0.0026859 (1/Angstroms)")
plt.show()
fig, axes = plt.subplots(figsize=(6, 6))
axes.set_xlabel("K/<K>")
axes.set_ylabel("P(K)")
for i in range(3):
art, = axes.plot(norm_bin_edges[i, 1][:-1], p_K[i, 1], '-o', label=str(time_steps_5[i])+" ms")
axes.legend()
axes.set_xlim(0, 4)
plt.title("2nd q ring 0.00278726 (1/Angstroms)")
plt.show()
fig, axes = plt.subplots(figsize=(6, 6))
axes.set_xlabel("K/<K>")
axes.set_ylabel("P(K)")
for i in range(3):
art, = axes.plot(norm_bin_edges[i, 2][:-1], p_K[i, 2], '-o', label=str(time_steps_5[i])+" ms" )
axes.set_xlim(0, 4)
axes.legend()
plt.title("3rd q ring 0.00288861 (1/ Angstroms)")
plt.show()
fig, axes = plt.subplots(figsize=(6, 6))
axes.set_xlabel("K/<K>")
axes.set_ylabel("P(K)")
for i in range(3):
art, = axes.plot(norm_bin_edges[i, 3][:-1], p_K[i, 3], '-o', label=str(time_steps_5[i])+" ms")
axes.set_xlim(0, 4)
axes.legend()
plt.title("4th q ring 0.0298997 (1/ Angstroms)")
plt.show()
import skbeam
print(skbeam.__version__)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Easily switch between interactive and static matplotlib plots
Step2: This data was provided by Dr. Andrei Fluerasu
Step3: Create the Rings Mask
Step4: Convert the edge values of the rings to q (reciprocal space)
Step5: Create a labeled array using roi.rings
Step6: Find the brightest pixel in any ROI in any image in the image set.
Step7: Everything looks good, next X-ray speckle visibility spectroscopy
Step8: Find the integration times
Step9: Get the mean intensity of each ring
Step10: Get the normalized bin edges and bin centers for each integration time.
Step11: 1st q ring 0.0026859 (1/Angstroms)
Step12: 2nd q ring 0.00278726 (1/Angstroms)¶
Step13: X-ray speckle visibility spectroscopy (XSVS) for different time steps
Step14: XSVS results for new integration times 1ms, 5ms and 25ms
Step15: Plot the results for each Q ring
Step16: 2nd q ring 0.00278726 (1/Angstroms)
Step17: 3rd q ring 0.00288861 (1/ Angstroms)
Step18: 4th q ring 0.0298997 (1/ Angstroms)
|
13,303
|
<ASSISTANT_TASK:>
Python Code:
from Bio.Blast import NCBIWWW
help(NCBIWWW.qblast)
from Bio.Blast import NCBIWWW
result_handle = NCBIWWW.qblast("blastn", "nt", "8332116")
from Bio.Blast import NCBIWWW
fasta_string = open("data/m_cold.fasta").read()
result_handle = NCBIWWW.qblast("blastn", "nt", fasta_string)
from Bio.Blast import NCBIWWW
from Bio import SeqIO
record = SeqIO.read("data/m_cold.fasta", format="fasta")
result_handle = NCBIWWW.qblast("blastn", "nt", record.seq)
from Bio.Blast import NCBIWWW
from Bio import SeqIO
record = SeqIO.read("data/m_cold.fasta", format="fasta")
result_handle = NCBIWWW.qblast("blastn", "nt", record.format("fasta"))
with open("data/my_blast.xml", "w") as out_handle:
out_handle.write(result_handle.read())
result_handle.close()
result_handle = open("data/my_blast.xml")
from Bio.Blast.Applications import NcbiblastxCommandline
help(NcbiblastxCommandline)
blastx_cline = NcbiblastxCommandline(query="opuntia.fasta", db="nr", evalue=0.001,
outfmt=5, out="opuntia.xml")
blastx_cline
print(blastx_cline)
# stdout, stderr = blastx_cline()
from Bio.Blast import NCBIWWW
result_handle = NCBIWWW.qblast("blastn", "nt", "8332116")
result_handle = open("data/my_blast.xml")
from Bio.Blast import NCBIXML
blast_record = NCBIXML.read(result_handle)
from Bio.Blast import NCBIXML
blast_records = NCBIXML.parse(result_handle)
from Bio.Blast import NCBIXML
blast_records = NCBIXML.parse(result_handle)
blast_record = next(blast_records)
print(blast_record.database_sequences)
# # ... do something with blast_record
for blast_record in blast_records:
    pass  # Do something with each blast_record here
blast_records = list(blast_records)
from Bio.Blast import NCBIXML
blast_records = NCBIXML.parse(result_handle)
E_VALUE_THRESH = 0.04
from Bio.Blast import NCBIXML
result_handle = open("data/my_blast.xml", "r")
blast_records = NCBIXML.parse(result_handle)
for alignment in blast_record.alignments:
for hsp in alignment.hsps:
if hsp.expect < E_VALUE_THRESH:
print("****Alignment****")
print("sequence:", alignment.title)
print("length:", alignment.length)
print("e value:", hsp.expect)
print(hsp.query[0:75] + "...")
print(hsp.match[0:75] + "...")
print(hsp.sbjct[0:75] + "...")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that the default settings on the NCBI BLAST website are not quite
Step2: Alternatively, if we have our query sequence already in a FASTA
Step3: We could also have read in the FASTA file as a SeqRecord and then
Step4: Supplying just the sequence means that BLAST will assign an identifier
Step5: This approach makes more sense if you have your sequence(s) in a
Step6: After doing this, the results are in the file my_blast.xml and the
Step7: Now that we’ve got the BLAST results back into a handle again, we are
Step8: In this example there shouldn’t be any output from BLASTX to the
Step9: If instead you ran BLAST some other way, and have the BLAST output (in
Step10: Now that we’ve got a handle, we are ready to parse the output. The code
Step11: or, if you have lots of results (i.e., multiple query sequences)
Step12: Just like Bio.SeqIO and Bio.AlignIO (see
Step13: Or, you can use a for-loop. Note though that you can step through the BLAST records only once. Usually, from each BLAST record you would save the information that you are interested in. If you want to save all returned BLAST records, you can convert the iterator into a list
Step14: Now you can access each BLAST record in the list with an index as usual. If your BLAST file is huge though, you may run into memory problems trying to save them all in a list.
Step15: I guess by now you’re wondering what is in a BLAST record.
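A minimal sketch pulling Steps 10-13 together, reusing the file name and threshold from the code above: parse the saved XML once, stepping through the records in a single pass and keeping only the hits below the E-value threshold.
from Bio.Blast import NCBIXML

E_VALUE_THRESH = 0.04
good_hits = []  # (hit title, e-value) pairs worth keeping
with open("data/my_blast.xml") as result_handle:
    # NCBIXML.parse returns an iterator, so the records can only be stepped through once
    for blast_record in NCBIXML.parse(result_handle):
        for alignment in blast_record.alignments:
            for hsp in alignment.hsps:
                if hsp.expect < E_VALUE_THRESH:
                    good_hits.append((alignment.title, hsp.expect))
print(len(good_hits), "HSPs below the E-value threshold")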
|
13,304
|
<ASSISTANT_TASK:>
Python Code:
# Import SPI rack and D5a module
from spirack import SPI_rack, D5a_module
COM_speed = 1e6 # Baud rate, doesn't matter much
timeout = 1 # In seconds
spi_rack = SPI_rack('COM4', COM_speed, timeout)
spi_rack.unlock() # Unlock the controller to be able to send data
D5a = D5a_module(spi_rack, module=2, reset_voltages=True)
D5a.change_span_update(1, D5a.range_2V_bi)
stepsize = D5a.get_stepsize(2)
print("Stepsize: " + str(stepsize) + " V")
# Changing the output by voltage
D5a.set_voltage(0, 1.9)
# Changing the output by digital values
D5a.change_value_update(1, 165535)
spi_rack.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open the SPI rack connection and unlock the controller. This is necessary after bootup of the controller module. If not unlocked, no communication with the modules can take place. The virtual COM port baud rate is irrelevant as it doesn't change the actual speed. Timeout can be changed, but 1 second is a good value.
Step2: Create a new D5a module object at the correct (set) module address using the SPI object. By default the module resets the output voltages to 0 Volt. Before it does this, it will read back the current value. If this value is non-zero it will slowly ramp it to zero. If reset_voltages = False then the output will not be changed.
Step3: The output span of the DAC can be set in software to the following ranges
Step4: The voltage can be set using the set_voltage function. If you want the output to be precisely equal to the set value, the voltage should be an integer multiple of the stepsize; this is especially recommended when performing sweeps, as otherwise the steps might not be equidistant (see the sketch after this list). Here we set the voltage of output 1 to 1.9 Volt. In the software the DAC count starts at 0, while on the front it starts at 1.
Step5: When done with the measurement, it is recommended to close the SPI Rack connection. This will allow other measurement scripts to access the device.
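A minimal sketch of such a stepsize-aligned sweep (see Step 4), reusing only the calls shown in the code above; the port name, module address and sweep range are assumptions.
import numpy as np
from spirack import SPI_rack, D5a_module

spi_rack = SPI_rack('COM4', 1e6, 1)          # assumed port, baud rate and timeout
spi_rack.unlock()
D5a = D5a_module(spi_rack, module=2, reset_voltages=True)

stepsize = D5a.get_stepsize(0)               # resolution of DAC 0 in its current span
targets = np.linspace(0.0, 1.0, 101)         # hypothetical sweep from 0 V to 1 V
targets = np.round(targets / stepsize) * stepsize   # snap every setpoint to the DAC grid
for v in targets:
    D5a.set_voltage(0, v)
spi_rack.close()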
|
13,305
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X = cancer.data
y = cancer.target
from sklearn.tree import DecisionTreeClassifier
def decision_stump(features, labels):
clf = DecisionTreeClassifier(max_depth=1, random_state=123)
clf.fit(features, labels)
predictions = clf.predict(features)
return predictions
import numpy as np
def get_accuracy(predictions, ground_truth):
equality = (predictions==ground_truth)
accuracy = np.mean(equality)
return accuracy*100
prediction = decision_stump(X, y)
accuracy = get_accuracy(prediction, y)
print(accuracy)
def mistakes(predictions, ground_truth):
    equality = (predictions==ground_truth)
misclass = np.where(equality==False)
correct = np.where(equality==True)
return misclass, correct
def easy_hard_split(features, labels, predictions):
hard_index, easy_index = mistakes(predictions, labels)
hard_X, hard_y = features[hard_index], labels[hard_index]
easy_X, easy_y = features[easy_index], labels[easy_index]
return hard_X, easy_X, hard_y, easy_y
hard_X, easy_X, hard_y, easy_y = easy_hard_split(X, y, prediction)
iterations = 1000
for i in range(iterations):
stump = decision_stump(X, y)
    accuracy = get_accuracy(stump, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build the Decision Stump
Step2: Get an Accuracy Result
Step3: Demonstrate for a single iteration
Step4: Extract Incorrect Classifications
Step5: Apply Boosting
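One possible way to sketch Step 5 with the helpers defined above: a crude boosting-style loop that repeatedly refits a stump on the examples the previous stump got wrong. This illustrates the idea only and is not a full AdaBoost implementation.
hard_X, hard_y = X, y
for i in range(10):                      # a handful of rounds is enough to illustrate
    preds = decision_stump(hard_X, hard_y)
    acc = get_accuracy(preds, hard_y)
    print("Round", i, "accuracy on current subset:", acc)
    hard_X, _, hard_y, _ = easy_hard_split(hard_X, hard_y, preds)
    if len(hard_y) == 0:                 # nothing left to misclassify
        break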
|
13,306
|
<ASSISTANT_TASK:>
Python Code:
def smoothListGaussian(list,degree=5):
list =[list[0]]*(degree-1) + list + [list[-1]]*degree
window=degree*2-1
weight=np.array([1.0]*window)
weightGauss=[]
for i in range(window):
i=i-degree+1
frac=i/float(window)
gauss=1/(np.exp((4*(frac))**2))
weightGauss.append(gauss)
weight=np.array(weightGauss)*weight
smoothed=[0.0]*(len(list)-window)
for i in range(len(smoothed)):
smoothed[i]=sum(np.array(list[i:i+window])*weight)/sum(weight)
return smoothed
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Generate a noisy signal to be filtered.
t = np.linspace(-1, 1, 201)
x = np.sin(2 * np.pi * t)
xn = x + np.random.normal(size=len(t)) * 0.08
# Make the plot.
plt.figure(figsize=(10,5))
plt.plot(t, xn, 'b', linewidth=1.75, alpha=0.75)
list_xn = list(xn)
original = smoothListGaussian(list_xn)
plt.plot(t, original, 'r');
plt.plot(t, x, 'g');
# Generate a noisy signal to be filtered.
t = np.linspace(-1, 1, 201)
x = (np.sin(2 * np.pi * 0.75 * t*(1-t) + 2.1) + 0.1*np.sin(2 * np.pi * 1.25 * t + 1) +
0.18*np.cos(2 * np.pi * 3.85 * t))
xn = x + np.random.randn(len(t)) * 0.08
# Make the plot.
plt.figure(figsize=(10,5))
plt.plot(t, xn, 'b', linewidth=1.75, alpha=0.75)
list_xn = list(xn)
original = smoothListGaussian(list_xn)
plt.plot(t, original, 'r');
plt.plot(t, x, 'g');
def smoothListGaussian2(myarray, degree=5):
    """Given a 1D array myarray, the code returns a Gaussian smoothed version of the array."""
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.hstack([ [myarray[0]]*(degree-1),myarray,[myarray[-1]]*degree])
window=degree*2-1
# Build the weights filter
weight=np.array([1.0]*window)
weightGauss=[]
for i in range(window):
i=i-degree+1
frac=i/float(window)
gauss=np.exp(-(4*frac)**2)
weightGauss.append(gauss)
weight=np.array(weightGauss)*weight
# create the smoothed array with a convolution with the window
smoothed=np.array([0.0]*(len(myarray)-window))
for i in range(len(smoothed)):
smoothed[i]=sum(myarray[i:i+window]*weight)/sum(weight)
return smoothed
np.all(original-smoothListGaussian2(xn)==0)
def smoothListGaussian3(myarray, degree=5):
    """Given a 1D array myarray, the code returns a Gaussian smoothed version of the array."""
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.pad(myarray, (degree-1,degree), mode='edge')
window=degree*2-1
# Build the weights filter
weight=np.ones(window)
weightGauss=[]
for i in range(window):
i=i-degree+1
frac=i/float(window)
gauss=np.exp(-(4*frac)**2)
weightGauss.append(gauss)
weight=np.array(weightGauss)*weight
# create the smoothed array with a convolution with the window
smoothed=np.zeros((len(myarray)-window))
for i in range(len(smoothed)):
smoothed[i]=sum(myarray[i:i+window]*weight)/sum(weight)
return smoothed
#Checking...
print("Still getting the same results...? ",np.all(original-smoothListGaussian3(xn)==0))
def smoothListGaussian4(myarray, degree=5):
    """Given a 1D array myarray, the code returns a Gaussian smoothed version of the array."""
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.pad(myarray, (degree-1,degree), mode='edge')
window=degree*2-1
# Build the weights filter
weight=np.ones(window)
for i in range(window):
frac=(i-degree+1)/float(window)
weight[i] = np.exp(-(4*frac)**2)
# create the smoothed array with a convolution with the window
smoothed=np.zeros((len(myarray)-window))
for i in range(len(smoothed)):
smoothed[i]=sum(myarray[i:i+window]*weight)/sum(weight)
return smoothed
#Checking...
print("Still getting the same results...? ",np.all(original-smoothListGaussian4(xn)==0))
def smoothListGaussian5(myarray, degree=5):
    """Given a 1D array myarray, the code returns a Gaussian smoothed version of the array."""
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.pad(myarray, (degree-1,degree), mode='edge')
window=degree*2-1
# Build the weights filter
weight=np.arange(-degree+1, degree)/window
weight = np.exp(-(16*weight**2))
weight /= weight.sum()
# create the smoothed array with a convolution with the window
smoothed=np.zeros((len(myarray)-window))
for i in range(len(smoothed)):
smoothed[i]=sum(myarray[i:i+window]*weight)/sum(weight)
return smoothed
#Checking...
print("Still getting the same results...? ",np.all(original-smoothListGaussian5(xn)==0))
plt.plot(original-smoothListGaussian4(xn));
print("Still getting the same results...? ",np.allclose(original, smoothListGaussian5(xn)))
def smoothListGaussian6(myarray, degree=5):
    """Given a 1D array myarray, the code returns a Gaussian smoothed version of the array."""
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.pad(myarray, (degree-1,degree-1), mode='edge')
window=degree*2-1
# Build the weights filter
weight=np.arange(-degree+1, degree)/window
weight = np.exp(-(16*weight**2))
weight /= weight.sum()
# create the smoothed array with a convolution with the window
smoothed = np.convolve(myarray, weight, mode='valid')
return smoothed
#Checking...
print("Still getting the same results...? ",np.allclose(original, smoothListGaussian6(xn)))
%timeit smoothListGaussian(list_xn)
%timeit smoothListGaussian2(xn)
%timeit smoothListGaussian3(xn)
%timeit smoothListGaussian4(xn)
%timeit smoothListGaussian5(xn)
%timeit smoothListGaussian6(xn)
from scipy import signal, ndimage
# Here we check that the Gaussian window in the signal module is producing the same window
# that we were manually doing
degree = 5
window=degree*2-1
# Build the weights filter
weight=np.arange(-degree+1, degree)/window
weight = np.exp(-(16*weight**2))
print(weight)
print(signal.gaussian(window, std=window/np.sqrt(32)))
def smoothListGaussian7(myarray, degree=5):
    """Given a 1D array myarray, the code returns a Gaussian smoothed version of the array."""
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
window=degree*2-1
# Build the weights filter
weight = signal.gaussian(window, std=window/np.sqrt(32))
weight /= weight.sum()
# create the smoothed array with a convolution with the window
smoothed = ndimage.convolve1d(myarray, weight, mode='nearest')
return smoothed
#Checking...
print("Still getting the same results...? ",np.allclose(original, smoothListGaussian7(xn)))
%timeit smoothListGaussian7(xn)
%timeit ndimage.filters.gaussian_filter1d(xn, sigma=window/np.sqrt(32), mode='nearest')
plt.figure(figsize=(10,5))
plt.plot(t, original-ndimage.filters.gaussian_filter1d(xn, sigma=window/np.sqrt(32), mode='nearest'));
plt.title('original - gaussian_filter1d')
plt.figure(figsize=(10,5));
plt.plot(t, xn, 'b', linewidth=1.75, alpha=0.75);
plt.plot(t, original, 'r-', linewidth=1.75, alpha=0.75);
plt.plot(t, ndimage.filters.gaussian_filter1d(xn, sigma=window/np.sqrt(32), mode='nearest'),
'g--',linewidth=1.75, alpha=0.75);
from numba import jit
@jit
def smoothListGaussian_numba(myarray):
    """Given a 1D array myarray, the code returns a Gaussian smoothed version of the array."""
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
degree = 5
myarray = np.pad(myarray, (degree-1,degree), mode='edge')
window = degree*2-1
# Build the weights filter
weight=np.zeros(window)
for i in range(window):
frac=(i-degree+1)/window
weight[i]=np.exp(-(4*frac)**2)
weight /= weight.sum()
# create the smoothed array with a convolution with the window
smoothed=np.zeros(myarray.shape[0]-window)
for i in range(smoothed.shape[0]):
for j in range(window):
smoothed[i] += myarray[i+j]*weight[j]
return smoothed
print("Still getting the same results...? ",np.allclose(original, smoothListGaussian_numba(xn)))
%timeit smoothListGaussian_numba(xn)
np.random.normal?
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This function is syntactically correct and it works. Let's test it with a data set. The same one used in the scipy cookbook (http
Step3: Despite working, this code has several shortcomings. Our task here will be to improve the code for readability and efficiency. Because premature optimization is the source of all evil, we will first focus on just making the code more clear and elegant. A first look at the code shows
Step4: Now we check if it works and it gives the same result
Step6: For clarity, we can use Numpy functions ones, and zeros to create some of the arrays. pad will also be useful to pad the array instead of using hstack. Sometimes it's enough to know English to guess the existence of certain functions, such as pad.
Step8: Modifying the value of the loop variable i is also rather ugly. We could directly define frac correctly.
Step10: At this point, we see that the values of the weight array only depend on i, not on previous values. So that we can create them with an array funcion. The i values go from -degree+1 to degree. That is, if our degree is 3 we want a range from -2 to 2. Then, we will have degree*2-1 = 5 windows centered around 0. That's all we need to know to remove that loop.
Step11: Ooops! But we're not getting the same results any more! Let's plot the difference to see what is going on.
Step12: OK! So the results are not exactly the same but almost, due to floating point errors. We could plot the results each time, but we can also use the numpy function allclose to check for correctness
Step14: Our last step of beautifying the code will be to check whether Numpy has a convolution function so that we do not need to do it in a for loop. Here, one has to check the documentation to make sure the treatment of the boundaries is the same. We also needed to pad the initial array with one less element, to get the same final number of elements in smoothed.
Step15: Just by making the code more elegant, we probably have improved its performance
Step17: Success! We also see that the big change came from removing the loop that was performing the convolution.
Step18: Of course, if we wanted to perform a gaussian filtering we could directly call ndimage.filters.gaussian_filter1d. Remark that the results are not exactly the same, because here we do not determine the window size. However they are approximately equal.
Step20: The take home message from this exercise is
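A quick numerical check of the claim in Step 10 for degree=3: the vectorized construction of the weights reproduces the loop-based one.
import numpy as np

degree = 3
window = degree*2 - 1

# loop version, as in smoothListGaussian4
w_loop = np.ones(window)
for i in range(window):
    frac = (i - degree + 1)/float(window)
    w_loop[i] = np.exp(-(4*frac)**2)

# vectorized version, as in smoothListGaussian5 (before normalisation)
w_vec = np.exp(-16*(np.arange(-degree + 1, degree)/float(window))**2)

print(np.allclose(w_loop, w_vec))  # expected: True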
|
13,307
|
<ASSISTANT_TASK:>
Python Code::
df['total'] = df['col_1'] + df['col_2']
df = df.pipe(lambda x: x.div(x['total'], axis='index')).applymap('{:.0%}'.format)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
13,308
|
<ASSISTANT_TASK:>
Python Code:
# A bit of setup
# import numpy as np
# import matplotlib.pyplot as plt
# from cs231n.classifiers.neural_net import TwoLayerNet
# %matplotlib inline
# plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
# plt.rcParams['image.interpolation'] = 'nearest'
# plt.rcParams['image.cmap'] = 'gray'
# # for auto-reloading external modules
# # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
# %load_ext autoreload
# %autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
import numpy as np
from cs231n.classifiers.neural_net import TwoLayerNet
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
print X
print y
return X, y
net = init_toy_model()
X, y = init_toy_data()
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=True)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=300, verbose=True)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
#plt.plot(stats['loss_history'])
#plt.xlabel('iteration')
#plt.ylabel('training loss')
#plt.title('Training Loss history')
#plt.show()
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Clasification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
pass
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Step6: Train the network
Step8: Load the data
Step9: Train a network
Step10: Debug the training
Step11: Tune your hyperparameters
Step12: Run on the test set
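A minimal sketch of the kind of sweep Step 11 asks for, using only the TwoLayerNet interface exercised above; the particular grid values are arbitrary choices, not the reference solution.
best_val = -1
best_net = None
for hidden_size in [50, 100]:
    for lr in [1e-4, 5e-4, 1e-3]:
        for reg in [0.1, 0.5]:
            net = TwoLayerNet(input_size, hidden_size, num_classes)
            net.train(X_train, y_train, X_val, y_val,
                      num_iters=1000, batch_size=200,
                      learning_rate=lr, learning_rate_decay=0.95,
                      reg=reg, verbose=False)
            val_acc = (net.predict(X_val) == y_val).mean()
            if val_acc > best_val:
                best_val, best_net = val_acc, net
print 'Best validation accuracy:', best_val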
|
13,309
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from pandas_datareader import data, wb
import datetime
# We will look at stock prices over the past year, starting at January 1, 2016
start = datetime.datetime(2016,1,1)
end = datetime.date.today()
# Let's get Apple stock data; Apple's ticker symbol is AAPL
# First argument is the series we want, second is the source ("yahoo" for Yahoo! Finance), third is the start date, fourth is the end date
apple = data.get_data_yahoo("AAPL", start, end)
apple.head()
import pylab as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 9) # Change the size of plots
## your plot code here
microsoft = data.get_data_yahoo("MSFT", start, end)
google = data.get_data_yahoo("GOOG", start, end)
stocks = pd.DataFrame({"AAPL": apple["Adj Close"],
"MSFT": microsoft["Adj Close"],
"GOOG": google["Adj Close"]})
stocks.head()
# plot here
# stocks['AAPL']...
# use apply function
stock_return = stocks # change this line
stock_return.head()
# plot returns
url = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies'
## use sp500 = pd.read_html...
## Loop through tickers
## Use apply again
# stock_return_sp500 = df_sp500 # change this line
# plot hist
# get high-performance stocks
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pandas Basics
Step2: Add More Stocks
Step3: Plot Price of all three stocks
Step4: Apply Rolling Window
Step5: Profit
Step6: S&P 500
Step7: Profit
Step8: Profit Histogram
Step9: Use Pandas to find high-performance stocks
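A minimal sketch for Steps 4-5 using the stocks DataFrame built above; the window length and the 10% cut-off are arbitrary choices.
# Smooth each price series with a 20-day rolling mean
stocks_smoothed = stocks.rolling(window=20).mean()
stocks_smoothed.plot()

# Total return of each stock over the period, then keep the high performers
total_return = stocks.apply(lambda p: p.iloc[-1]/p.iloc[0] - 1)
high_performers = total_return[total_return > 0.10]
print high_performers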
|
13,310
|
<ASSISTANT_TASK:>
Python Code:
# This is probably due to a unit conversion in a multiplicative prefactor
# This multiplicative prefactor is based on nanometers
r_min = 0.14
r_max = 1.0
print (1/r_min - 1/r_max)
# This multiplicative prefactor is based on angstroms
r_min = 1.4
r_max = 10.0
print (1/r_min - 1/r_max)
4*np.pi
# This is after multiplication by 4*pi
# Sum for atom 0: 1.55022, 2.96246
# Sum for atom 1: 1.56983, 2.96756
# Sum for atom 2: 1.41972, 2.90796
# Sum for atom 3: 1.45936, 3.02879
# Sum for atom 4: 2.05316, 3.32989
# Sum for atom 5: 1.5354, 3.06405
# Sum for atom 6: 1.43417, 3.02438
# Sum for atom 7: 1.85508, 3.21875
# Sum for atom 8: 2.06909, 3.42013
# Sum for atom 9: 2.4237, 4.32524
# Sum for atom 10: 1.9603, 4.54512
# Sum for atom 11: 2.18017, 3.99349
# Sum for atom 12: 2.19774, 3.8152
# Sum for atom 13: 2.02152, 3.7884
# Sum for atom 14: 2.05662, 4.10305
# Sum for atom 15: 2.65659, 4.34157
# Sum for atom 16: 2.81839, 4.63688
# Sum for atom 17: 2.90653, 4.40561
# Sum for atom 18: 2.37779, 4.38092
# Sum for atom 19: 2.17795, 4.01659
# Sum for atom 20: 1.77652, 3.25573
# Sum for atom 21: 1.22359, 3.24223
# Sum for atom 22: 1.2336, 3.3604
# Sum for atom 23: 1.21771, 3.1483
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Switching from nanometers to angstroms makes the multiplicative prefactor smaller, which is opposite of the desired effect!
Step2: Igrid[atomI] appears to be off by a factor of 4*pi!
|
13,311
|
<ASSISTANT_TASK:>
Python Code:
# This exercise is mostly for us to understand what kind of data we have and then
# run some simple stats on the fields/values in the data. Pandas will be great for that
import pandas as pd
pd.__version__
# Set default figure sizes
pylab.rcParams['figure.figsize'] = (16.0, 5.0)
# Lets take a peek at our system.log, with IPython you
# can execute system commands with '!' (also see %alias)
!head /var/log/system.log
# Okay so weird timestamp, machine name and then random stuff...
import dateutil.parser
# Helper function to make compact, composite labels for the syslog rows
def make_label(features):
return unicode(':'.join([f[:6] for f in features]), 'utf-8').encode('ascii','ignore')
# Now process the syslog file, part of the challenge is parsing the darn thing.
# Typically each row in a syslog will have a timestamp first and then some random
# set of other fields split on whitespace. We're going to carefully pull out the
# timestamp but then just treat everything else as bag of tokens (sparse data).
date_index = []
with open('/var/log/system.log') as syslog:
syslog_rows = syslog.readlines()
syslog_events = []
for row in syslog_rows:
split_list = row.split()
date_string = ' '.join(split_list[:3])
try:
date = dateutil.parser.parse(date_string)
except (ValueError, TypeError) as e:
continue # Going to skip rows where we can't get a timestamp
features = split_list[4:]
syslog_events.append({'features': features, 'label': make_label(features), 'type': features[0].split('[')[0]})
date_index.append(date)
dataframe = pd.DataFrame(syslog_events, index=date_index)
# Make sure our timestamp got pulled/parsed/converted correctly
type(dataframe.index[0])
dataframe.head(5)
# So what do we have? Datetime is our index, features is just the
# syslog row split by whitespace and placed into a list. The label
# is just the features (truncated) and flattened with : separators.
# 'type' is just the syslog identifier right after the datetime and machine.
# Note: The parsing of the syslog is more art than science and syslogs
# can vary greatly, but we haven't done any super magic here,
# just basically thrown the row data into some bins...
dataframe.tail(5)
# We can get a count of all the unique values by running value_counts()
dataframe['label'].value_counts()
# Okay the breakout below gives us a sense for our challenge...
# Realize that we don't want the 'last message' events
dataframe = dataframe[dataframe['type'] != 'last']
# Now prepare the data for plotting by pivoting on the
# feature to create a new column (series) for each value
# Only pull out the top 7 syslog event types
topN = dataframe['type'].value_counts()[:7].index
subset = dataframe[dataframe['type'].isin(topN)]
print 'Subset: %d rows %d columns' % subset.shape
# We're going to add a new column called value (needed for pivot). This
# is a bit dorky, but needed as the new columns that get created should
# really have a value in them, also we can use this as our value to sum over.
subset['count'] = 1
pivot = pd.pivot_table(subset, values='count', rows=subset.index, cols=['type'], fill_value=.01)
by = lambda x: lambda y: getattr(y, x)
grouped = pivot.groupby([by('hour')]).sum()
grouped.plot(kind='bar', stacked=True)
grouped.plot()
# Even for small syslogs the number of similarity pairs to compute quickly
# becomes quite large O(N**2), so for 100k rows that's 10 billion possible
# pairs. Using Banded MinHash will drastically reduce the number of
# candidates that we have to compute.
import data_hacking.lsh_sims as lsh_sims
# Note: The parameters here are setup for feeding the results into a Hierarchical
# Clustering algorithm, which needs as many similarities as you can get.
# In general you'd parameters like num_hashes:20, lsh_bands:5 lsh_rows:4
# Note: lsh_bands*lsh_rows ^must^ equal num_hashes
params = {'num_hashes':20, 'lsh_bands':20, 'lsh_rows':1, 'drop_duplicates':True}
lsh = lsh_sims.LSHSimilarities(dataframe['features'], mh_params=params)
sims = lsh.batch_compute_similarities(distance_metric='jaccard', threshold=.2)
# Lets look at the difference between Jaccard Similarity and Levenshtein Similarity
# So here similarity is a normalized measure of inverse distance...
print 'Jaccard Index (Sim): %f ' % lsh.jaccard_sim(['a','b','c','d'], ['a','b','d','c'])
print 'Levenshtein Distance: %f ' % lsh.levenshtein(['a','b','c','d'], ['a','b','d','c'])
print 'Levenshtein (Sim): %f ' % lsh.l_sim(['a','b','c','d'], ['a','b','d','c'])
# One more example for intuition (also note they don't have to be the same size)
print 'Jaccard Index (Sim): %f ' % lsh.jaccard_sim(['a','b','c'], ['a','b','c','x'])
print 'Levenshtein Distance: %f ' % lsh.levenshtein(['a','b','c'], ['a','b','c','x'])
print 'Levenshtein (Sim): %f ' % lsh.l_sim(['a','b','c'], ['a','b','c','x'])
# Okay now that we have the similarities between all the rows in our syslog
# we can start to investigate the results.
sims.sort(reverse=True)
print '*** Top sims ***'
sims[:10]
#sims[-10:]
print dataframe.iloc[376]['features']
print dataframe.iloc[1090]['features']
# The two feature sets should look quite similar (recall that this
# is just our syslog row split on white space and thrown into a list)
# So now for any row in our syslog we can see what rows are highly
# similar to that row.
query_item = ['Google', 'Chrome', 'Helper[11545]:', 'Process', 'unable', 'to', 'create', 'connection', 'because', 'the', 'sandbox', 'denied', 'the', 'right', 'to', 'lookup', 'com.apple.coreservices.launchservicesd', 'and', 'so', 'this', 'process', 'cannot', 'talk', 'to', 'launchservicesd.', ':', 'LSXPCClient.cp', '#426', '___ZN26LSClientToServerConnection21setupServerConnectionEiPK14__CFDictionary_block_invoke()', 'q=com.apple.main-thread']
lsh.top_N(query_item,dataframe['label'], 5)
# Note the query object doesn't have all the original features
query_item = ['Google', 'Chrome', 'Process', 'unable', 'to', 'sandbox']
lsh.top_N(query_item,dataframe['label'], 5)
# A simple example for illustration
import data_hacking.hcluster as hcluster
toy_data = [['a','b','c','d'],['a','b','d'],['a','b','e','d'],['a','b','f'],
['w','x','y','z'],['x','y','z'],['w','x','q','z','y'],
['r','s','t'],['u','s','t']]
toy_lsh = lsh_sims.LSHSimilarities(toy_data, mh_params=params)
toy_sims = toy_lsh.batch_compute_similarities(distance_metric='jaccard', threshold=.2)
# Compute a hierarchical clustering from the similarity list
toy_h_clustering = hcluster.HCluster(toy_data)
toy_h_clustering.set_sim_method(toy_lsh.jaccard_sim)
toy_h_tree, toy_root = toy_h_clustering.sims_to_hcluster(toy_sims)
# Plot the hierarchical tree
toy_h_clustering.plot_htree(toy_h_tree, prog='dot')
# Now we'll do the same thing for our syslog data
# Compute a hierarchical clustering from the similarity list
h_clustering = hcluster.HCluster(dataframe['features'])
h_clustering.set_sim_method(lsh.jaccard_sim)
h_tree, root = h_clustering.sims_to_hcluster(sims, agg_sim=.3)
# Here's an image for those folks not looking at it interactively
# Even better save the graph out to json format and visualize it with D3 (D3js.org)
import networkx.readwrite.json_graph
import json
graph_json = networkx.readwrite.json_graph.tree_data(h_tree, root=root)
json.dump(graph_json, open('h_tree.json','w'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's do standard histogram and event volume over time plots
Step2: <img align="right" src="files/images/spice_weasel.jpg" width="300px" style="margin
Step3: The LSH Sims python class has two distance metrics
Step4: <div style="float
Step5: <img src="files/images/h_cluster.png">
|
13,312
|
<ASSISTANT_TASK:>
Python Code:
import rebound
import numpy as np
sim = rebound.Simulation()
OMEGA = 0.00013143527 # [1/s]
sim.integrator_sei_OMEGA = OMEGA
surface_density = 400. # kg/m^2
particle_density = 400. # kg/m^3
sim.G = 6.67428e-11 # N m^2 / kg^2
sim.dt = 1e-3*2.*np.pi/OMEGA
sim.softening = 0.2 # [m]
boxsize = 200. # [m]
sim.configure_box(boxsize)
sim.configure_ghostboxes(2,2,0)
sim.integrator = "sei"
sim.boundary = "shear"
sim.gravity = "tree"
sim.collision = "tree"
def cor_bridges(r, v):
eps = 0.32*pow(abs(v)*100.,-0.234)
if eps>1.:
eps=1.
if eps<0.:
eps=0.
return eps
sim.coefficient_of_restitution = cor_bridges
def powerlaw(slope, min_v, max_v):
y = np.random.uniform()
pow_max = pow(max_v, slope+1.)
pow_min = pow(min_v, slope+1.)
return pow((pow_max-pow_min)*y + pow_min, 1./(slope+1.))
total_mass = 0.
while total_mass < surface_density*(boxsize**2):
radius = powerlaw(slope=-3, min_v=1, max_v=4) # [m]
mass = particle_density*4./3.*np.pi*(radius**3)
x = np.random.uniform(low=-boxsize/2., high=boxsize/2.)
sim.add(
m=mass,
r=radius,
x=x,
y=np.random.uniform(low=-boxsize/2., high=boxsize/2.),
z=np.random.normal(),
vx = 0.,
vy = -3./2.*x*OMEGA,
vz = 0.)
total_mass += mass
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as patches
def plotParticles(sim):
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111,aspect='equal')
ax.set_ylabel("radial coordinate [m]")
ax.set_xlabel("azimuthal coordinate [m]")
ax.set_ylim(-boxsize/2.,boxsize/2.)
ax.set_xlim(-boxsize/2.,boxsize/2.)
for i, p in enumerate(sim.particles):
circ = patches.Circle((p.y, p.x), p.r, facecolor='darkgray', edgecolor='black')
ax.add_patch(circ)
plotParticles(sim)
sim.integrate(2.*np.pi/OMEGA)
plotParticles(sim)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next up, setting up several constants. We will be simulating a shearing sheet, a box with shear-periodic boundary conditions. This is a local approximation which makes the approximation that the epicyclic frequency $\Omega$ is the same for all particles.
Step2: Next, we need to let REBOUND know about $\Omega$. Within REBOUND $\Omega$ is used by the integrator SEI, the Symplectic Epicycle Integrator (see Rein and Tremaine 2012).
Step3: Finally, let us define the surface density of the ring and the particle density.
Step4: The gravitational constant in SI units is
Step5: We choose a timestep of 1/1000th of the orbital period.
Step6: We enable gravitational softening to smear out any potential numerical artefacts at very small scales.
Step7: Next up, we configure the simulation box. By default REBOUND used no boundary conditions, but here we have shear periodic boundaries and a finite simulation domain, so we need to let REBOUND know about the simulation boxsize (note that it is significantly smaller than $a$, so our local approximation is very good. In this example we'll work in SI units.
Step8: Because we have shear-periodic boundary conditions, we use ghost boxes to simulate the gravity of neighbouring ring patches. The more ghostboxes we use, the smoother the gravitational force accross the boundary. Here, two layers of ghost boxes in the x and y direction are enough (this is a total of 24 ghost boxes). We don't need ghost boxes in the z direction because a rings is a two dimensional system.
Step9: We can now setup which REBOUND modules we want to use for our simulation. Besides the SEI integrator and the shear-periodic boundary conditions mentioned above, we select the tree modules for both gravity and collisions. This speeds up the code from $O(N^2)$ to $O(N \log(N))$ for large numbers of particles $N$.
Step10: When two ring particles collide, they lose energy during the bounce. We here use a velocity dependent Bridges et al. coefficient of restitution. It is implemented as a python function (a C implementation would be faster!). We let REBOUND know which function we want to use by setting the coefficient_of_restitution function pointer in the simulation instance.
Step11: To initialize the particles, we will draw random numbers from a power law distribution.
Step12: Now we can finally add particles to REBOUND. Note that we initialize particles so that they initially have no velocity relative to the mean shear flow.
Step13: To see what is going on in our simulation, we create a function to plot the current positions of particles and call it once to visualise the initial conditions.
Step14: We now integrate for one orbital period $P=2\pi/\Omega$.
Step15: The integration takes a few seconds, then we can visualise the final particle positions.
|
13,313
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import math
import cvxpy
def get_markowitz_weights(mu, Sigma, gamma=1, max_position=1.0, max_leverage=1.0, short=False):
w = cvxpy.Variable(len(Sigma))
g = cvxpy.Parameter(sign='positive')
L = cvxpy.Parameter()
g.value = gamma
L.value = max_leverage
try:
ret = mu.T*w
except ValueError:
ret = mu*w
risk = cvxpy.quad_form(w, Sigma)
objective = cvxpy.Maximize(ret - g*risk)
constraints = [
cvxpy.abs(w) < max_position,
cvxpy.norm(w, 1) <= L, # Make it so we don't have to invest everything
]
if not short:
constraints.append(w >= 0) # Force all positive weights
prob = cvxpy.Problem(
objective,
constraints
)
result = prob.solve()
return w.value
universes = 1000
evens = 19
total = 38
payout = 100
rounds = 1
results = np.zeros(universes)
#Your code goes here
p = float(19)/total
for i in range(universes):
results[i] = payout * np.random.binomial(n = rounds, p = p)
print "Payout mean:", np.mean(results)
print "Payout std:", np.std(results)
universes = 1000
evens = 19
total = 38
payout = 1
rounds = 100
results = np.zeros(universes)
#Your code goes here
p = float(19)/total
for i in range(universes):
results[i] = payout * np.random.binomial(n = rounds, p = p)
print "Payout mean:", np.mean(results)
print "Payout std:", np.std(results)
time_start = '2015-01-01'
time_halfway = '2015-07-01'
time_end = '2016-01-01'
AMZN_r = get_pricing('AMZN', fields='price', start_date=time_start, end_date=time_end).pct_change()[1:]
X = np.linspace(0, len(AMZN_r), len(AMZN_r))
#Your code goes here
print "AMZN returns std:", np.std(AMZN_r.loc[time_halfway:])
AMZN_r.plot(alpha = 0.5);
plt.legend();
symbol_list = ['BEN', 'SYMC', 'IP', 'SWKS', 'IVZ', 'MJN', 'WMB', 'LB', 'TWX', 'NFX', 'PFE', 'LLY', 'HP', 'JPM', 'CXO', 'TJX', 'CAG', 'BBT', 'ATVI', 'NFLX']
prices_df = get_pricing(symbol_list, fields=['price']
, start_date=time_start, end_date=time_end)['price']
prices_df.columns = map(lambda x: x.symbol, prices_df.columns)
eweights_df = len(symbol_list) * [float(1)/len(symbol_list)]
returns_df = prices_df.pct_change(1)[1:]
#Your code goes here
returns_df['EWP'] = returns_df[symbol_list].dot(eweights_df)
print "AMZN returns std:", np.std(AMZN_r.loc[time_halfway:])
print "Portfolio returns std:", np.std(returns_df['EWP'].loc[time_halfway:])
AMZN_r.plot(alpha = 0.5);
returns_df['EWP'].loc[time_halfway:].plot();
plt.legend();
#Pipeline Setup
from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import CustomFactor
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters import QTradableStocksUS
from time import time
universe = QTradableStocksUS()
pipe = Pipeline(columns = {'Market Cap' : morningstar.valuation.market_cap.latest},
screen=universe
)
start_timer = time()
results = run_pipeline(pipe, time_start, time_end)
end_timer = time()
results.fillna(value=0);
print "Time to run pipeline %.2f secs" % (end_timer - start_timer)
# This is important as sometimes the first data returned won't be on the specified start date
first_trading_day = results.index.levels[0][1]
market_cap = results.loc[first_trading_day]['Market Cap']
market_cap.index = [x.symbol for x in market_cap.index]#pd.MultiIndex.from_tuples([(x[0], x[1].symbol) for x in market_cap.index])
mcs = market_cap # pd.DataFrame(market_cap.loc[(first_trading_day,)].loc[symbol_list]).transpose()
mweights = (mcs[symbol_list]/sum(mcs[symbol_list])).transpose()
#Your code goes here
returns_df['MWP'] = returns_df[symbol_list].dot(mweights)
print "AMZN returns std:", np.std(AMZN_r.loc[time_halfway:])
print "EWP returns std:", np.std(returns_df['EWP'].loc[time_halfway:])
print "MWP returns std:", np.std(returns_df['MWP'].loc[time_halfway:])
AMZN_r[time_halfway:].plot(alpha = 0.5);
returns_df['EWP'].loc[time_halfway:].plot(alpha = 0.5);
returns_df['MWP'].loc[time_halfway:].plot();
plt.legend();
mu = returns_df[symbol_list].\
loc[:time_halfway].fillna(0).mean().as_matrix()
sigma = returns_df[symbol_list].\
loc[:time_halfway].fillna(0).cov().as_matrix()
mkweights_df = get_markowitz_weights(mu, sigma)
#Your code goes here
returns_df['MKW'] = returns_df[symbol_list].dot(mkweights_df)
print "AMZN returns std:", np.std(AMZN_r.loc[time_halfway:])
print "EWP returns std:", np.std(returns_df['EWP'].loc[time_halfway:])
print "MWP returns std:", np.std(returns_df['MWP'].loc[time_halfway:])
print "MKW returns std:", np.std(returns_df['MKW'].loc[time_halfway:]), "\n"
AMZN_r.loc[time_halfway:].plot(alpha = 0.5);
returns_df['EWP'].loc[time_halfway:].plot(alpha = 0.5);
returns_df['MWP'].loc[time_halfway:].plot(alpha = 0.5);
returns_df['MKW'].loc[time_halfway:].plot();
plt.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper Functions
Step2: Exercise 1
Step3: b. $1 Bets
Step4: Exercise 2
Step5: b. Equally Weighted Portfolio
Step6: c. Market Weighted Portfolio
Step7: d. Markowitz Portfolio
|
13,314
|
<ASSISTANT_TASK:>
Python Code:
# Check your Python version
from sys import version_info
if version_info.major != 2 or version_info.minor != 7:
    raise Exception('Please use Python 2.7 to complete this project')
# Import the libraries needed for this project
import numpy as np
import pandas as pd
import visuals as vs
from IPython.display import display # Allows the use of display() for DataFrames
# Show matplotlib plots inline (nicely formatted in the notebook)
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
# Display a description of the dataset
display(data.describe())
# TODO: Select the indices of three data points you wish to sample from the dataset
indices = [1,220,420]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
from sklearn.model_selection import train_test_split
from sklearn import tree
# TODO: Make a copy of the DataFrame and use the 'drop' function to drop the specified features
source_data = pd.read_csv("customers.csv")
target_data = pd.read_csv("customers.csv")
source_data.drop(['Delicatessen','Channel','Region'], axis = 1, inplace = True)
target = list(target_data['Milk'])
rng = np.random.RandomState(1)
# TODO: Split the data into training and testing sets, using the given feature as the target
features_train, features_test, labels_train, labels_test = train_test_split(source_data, target, test_size=0.25, random_state=rng)
# TODO: Create a DecisionTreeRegressor and train it on the training set
regressor = tree.DecisionTreeRegressor().fit(features_train,labels_train)
# TODO: Report the prediction score on the testing set
score = regressor.score(features_test,labels_test)
print("score:",score)
# Produce a scatter matrix for each pair of features in the data
pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# Display the log-transformed sample data
display(log_samples)
# For each feature, find the data points with extremely high or low values
for feature in log_data.keys():
    # TODO: Calculate Q1 (the 25th percentile of the data) for the given feature
    Q1 = np.percentile(log_data[feature], 25, axis=0)
    # TODO: Calculate Q3 (the 75th percentile of the data) for the given feature
    Q3 = np.percentile(log_data[feature], 75, axis=0)
    # TODO: Use the interquartile range to calculate an outlier step (1.5 times the IQR)
    step = 1.5*(Q3-Q1)
    # Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# OPTIONAL: Select the indices of the data points you wish to remove
outliers = [154]
# Remove the outliers, if any were selected
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
from sklearn.decomposition import PCA
# TODO: Apply PCA to the good data, transforming it into as many dimensions as there are features
pca = PCA(6, copy=True, whiten=False).fit(good_data)
# TODO: Apply the transformation to log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate the PCA results plot
pca_results = vs.pca_results(good_data, pca)
# Display the sample log-data after the PCA transformation has been applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
# TODO: Apply PCA to the good data, reducing it to two dimensions
pca = PCA(2, copy=True, whiten=False).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
# Display the sample log-data after the two-dimensional PCA transformation
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = None
# TODO: Predict the cluster for each data point
preds = None
# TODO: Find the cluster centers
centers = None
# TODO: Predict the cluster for each transformed sample data point
sample_preds = None
# TODO: Calculate the mean silhouette coefficient for the chosen clustering
score = None
# Display the clustering results from the implementation above
vs.cluster_results(reduced_data, preds, centers, pca_samples)
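# A minimal sketch of one way to fill in the clustering TODOs above, using K-Means
# and the mean silhouette coefficient; the choice of two clusters is an assumption
# that should be validated against the score.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

clusterer = KMeans(n_clusters=2, random_state=24).fit(reduced_data)
preds = clusterer.predict(reduced_data)
centers = clusterer.cluster_centers_
sample_preds = clusterer.predict(pca_samples)
score = silhouette_score(reduced_data, preds)
print "Mean silhouette coefficient:", score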
# TODO: Inverse transform the cluster centers
log_centers = None
# TODO: Exponentiate the centers
true_centers = None
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Read the data containing the clustering results
cluster_data = pd.read_csv("cluster.csv")
y = cluster_data['Region']
X = cluster_data.drop(['Region'], axis = 1)
# Split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=24)
clf = RandomForestClassifier(random_state=24)
clf.fit(X_train, y_train)
print "使用cluster特征的得分", clf.score(X_test, y_test)
# Remove the cluster feature
X_train = X_train.copy()
X_train.drop(['cluster'], axis=1, inplace=True)
X_test = X_test.copy()
X_test.drop(['cluster'], axis=1, inplace=True)
clf.fit(X_train, y_train)
print "不使用cluster特征的得分", clf.score(X_test, y_test)
# 根据‘Channel‘数据显示聚类的结果
vs.channel_results(reduced_data, outliers, pca_samples)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 分析数据
Step2: 练习
Step3: 问题 1
Step4: 问题 2
Step5: 问题 3
Step6: 观察
Step7: 练习
Step8: 问题 4
Step9: 问题 5
Step10: 练习:降维
Step11: 观察
Step12: 可视化一个双标图(Biplot)
Step13: 观察
Step14: 问题 7
Step15: 练习
Step16: 问题 8
Step17: 回答
Step18: 回答:
|
13,315
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures
# A seed just to ensure that the random numbers are the same for every run.
# Useful for eventual debugging.
np.random.seed(3155)
# Generate the data.
nsamples = 100
x = np.random.randn(nsamples)
y = 3*x**2 + np.random.randn(nsamples)
## Cross-validation on Ridge regression using KFold only
# Decide degree on polynomial to fit
poly = PolynomialFeatures(degree = 6)
# Decide which values of lambda to use
nlambdas = 500
lambdas = np.logspace(-3, 5, nlambdas)
# Initialize a KFold instance
k = 5
kfold = KFold(n_splits = k)
# Perform the cross-validation to estimate MSE
scores_KFold = np.zeros((nlambdas, k))
i = 0
for lmb in lambdas:
ridge = Ridge(alpha = lmb)
j = 0
for train_inds, test_inds in kfold.split(x):
xtrain = x[train_inds]
ytrain = y[train_inds]
xtest = x[test_inds]
ytest = y[test_inds]
Xtrain = poly.fit_transform(xtrain[:, np.newaxis])
ridge.fit(Xtrain, ytrain[:, np.newaxis])
Xtest = poly.fit_transform(xtest[:, np.newaxis])
ypred = ridge.predict(Xtest)
scores_KFold[i,j] = np.sum((ypred - ytest[:, np.newaxis])**2)/np.size(ypred)
j += 1
i += 1
estimated_mse_KFold = np.mean(scores_KFold, axis = 1)
## Cross-validation using cross_val_score from sklearn along with KFold
# kfold is an instance initialized above as:
# kfold = KFold(n_splits = k)
estimated_mse_sklearn = np.zeros(nlambdas)
i = 0
for lmb in lambdas:
ridge = Ridge(alpha = lmb)
X = poly.fit_transform(x[:, np.newaxis])
estimated_mse_folds = cross_val_score(ridge, X, y[:, np.newaxis], scoring='neg_mean_squared_error', cv=kfold)
    # cross_val_score returns an array containing the estimated negative mse for every fold.
    # We have to take the mean of every array in order to get an estimate of the mse of the model.
estimated_mse_sklearn[i] = np.mean(-estimated_mse_folds)
i += 1
## Plot and compare the slightly different ways to perform cross-validation
plt.figure()
plt.plot(np.log10(lambdas), estimated_mse_sklearn, label = 'cross_val_score')
plt.plot(np.log10(lambdas), estimated_mse_KFold, 'r--', label = 'KFold')
plt.xlabel('log10(lambda)')
plt.ylabel('mse')
plt.legend()
plt.show()
#Model training, we compute the mean value of y and X
y_train_mean = np.mean(y_train)
X_train_mean = np.mean(X_train,axis=0)
X_train = X_train - X_train_mean
y_train = y_train - y_train_mean
# Then we fit our model with the training data
trained_model = some_model.fit(X_train,y_train)
# Model prediction: we also need to transform the data set used for the prediction.
X_test = X_test - X_train_mean #Use mean from training data
y_pred = trained_model.predict(X_test)
y_pred = y_pred + y_train_mean
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
np.random.seed(2021)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
def fit_beta(X, y):
return np.linalg.pinv(X.T @ X) @ X.T @ y
true_beta = [2, 0.5, 3.7]
x = np.linspace(0, 1, 11)
y = np.sum(
np.asarray([x ** p * b for p, b in enumerate(true_beta)]), axis=0
) + 0.1 * np.random.normal(size=len(x))
degree = 3
X = np.zeros((len(x), degree))
# Include the intercept in the design matrix
for p in range(degree):
X[:, p] = x ** p
beta = fit_beta(X, y)
# Intercept is included in the design matrix
skl = LinearRegression(fit_intercept=False).fit(X, y)
print(f"True beta: {true_beta}")
print(f"Fitted beta: {beta}")
print(f"Sklearn fitted beta: {skl.coef_}")
ypredictOwn = X @ beta
ypredictSKL = skl.predict(X)
print(f"MSE with intercept column")
print(MSE(y,ypredictOwn))
print(f"MSE with intercept column from SKL")
print(MSE(y,ypredictSKL))
plt.figure()
plt.scatter(x, y, label="Data")
plt.plot(x, X @ beta, label="Fit")
plt.plot(x, skl.predict(X), label="Sklearn (fit_intercept=False)")
# Do not include the intercept in the design matrix
X = np.zeros((len(x), degree - 1))
for p in range(degree - 1):
X[:, p] = x ** (p + 1)
# Intercept is not included in the design matrix
skl = LinearRegression(fit_intercept=True).fit(X, y)
# Use centered values for X and y when computing coefficients
y_offset = np.average(y, axis=0)
X_offset = np.average(X, axis=0)
beta = fit_beta(X - X_offset, y - y_offset)
intercept = np.mean(y_offset - X_offset @ beta)
print(f"Manual intercept: {intercept}")
print(f"Fitted beta (wiothout intercept): {beta}")
print(f"Sklearn intercept: {skl.intercept_}")
print(f"Sklearn fitted beta (without intercept): {skl.coef_}")
ypredictOwn = X @ beta
ypredictSKL = skl.predict(X)
print(f"MSE with Manual intercept")
print(MSE(y,ypredictOwn+intercept))
print(f"MSE with Sklearn intercept")
print(MSE(y,ypredictSKL))
plt.plot(x, X @ beta + intercept, "--", label="Fit (manual intercept)")
plt.plot(x, skl.predict(X), "--", label="Sklearn (fit_intercept=True)")
plt.grid()
plt.legend()
plt.show()
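As an added quick check (not in the original notebook), the manually recovered intercept and coefficients can be compared numerically with Scikit-Learn's fit, which handles the centering internally:
# Added sanity check: both routes are algebraically equivalent, so both should print True
print(np.allclose(beta, skl.coef_))
print(np.allclose(intercept, skl.intercept_))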
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import linear_model
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
# A seed just to ensure that the random numbers are the same for every run.
# Useful for debugging.
np.random.seed(3155)
n = 100
x = np.random.rand(n)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)
Maxpolydegree = 20
X = np.zeros((n,Maxpolydegree))
#We explicitly include the intercept column
for degree in range(Maxpolydegree):
X[:,degree] = x**degree
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
p = Maxpolydegree
I = np.eye(p,p)
# Decide which values of lambda to use
nlambdas = 6
MSEOwnRidgePredict = np.zeros(nlambdas)
MSERidgePredict = np.zeros(nlambdas)
lambdas = np.logspace(-4, 2, nlambdas)
for i in range(nlambdas):
lmb = lambdas[i]
OwnRidgeBeta = np.linalg.pinv(X_train.T @ X_train+lmb*I) @ X_train.T @ y_train
# Note: we include the intercept column and no scaling
RegRidge = linear_model.Ridge(lmb,fit_intercept=False)
RegRidge.fit(X_train,y_train)
# and then make the prediction
ytildeOwnRidge = X_train @ OwnRidgeBeta
ypredictOwnRidge = X_test @ OwnRidgeBeta
ytildeRidge = RegRidge.predict(X_train)
ypredictRidge = RegRidge.predict(X_test)
MSEOwnRidgePredict[i] = MSE(y_test,ypredictOwnRidge)
MSERidgePredict[i] = MSE(y_test,ypredictRidge)
print("Beta values for own Ridge implementation")
print(OwnRidgeBeta)
print("Beta values for Scikit-Learn Ridge implementation")
print(RegRidge.coef_)
print("MSE values for own Ridge implementation")
print(MSEOwnRidgePredict[i])
print("MSE values for Scikit-Learn Ridge implementation")
print(MSERidgePredict[i])
# Now plot the results
plt.figure()
plt.plot(np.log10(lambdas), MSEOwnRidgePredict, 'r', label = 'MSE own Ridge Test')
plt.plot(np.log10(lambdas), MSERidgePredict, 'g', label = 'MSE Ridge Test')
plt.xlabel('log10(lambda)')
plt.ylabel('MSE')
plt.legend()
plt.show()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.preprocessing import StandardScaler
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
# A seed just to ensure that the random numbers are the same for every run.
# Useful for debugging.
np.random.seed(315)
n = 100
x = np.random.rand(n)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)
Maxpolydegree = 20
X = np.zeros((n,Maxpolydegree-1))
for degree in range(1,Maxpolydegree): #No intercept column
X[:,degree-1] = x**(degree)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
#For our own implementation, we will need to deal with the intercept by centering the design matrix and the target variable
X_train_mean = np.mean(X_train,axis=0)
#Center by removing mean from each feature
X_train_scaled = X_train - X_train_mean
X_test_scaled = X_test - X_train_mean
#The model intercept (called y_scaler) is given by the mean of the target variable (IF X is centered)
#Remove the intercept from the training data.
y_scaler = np.mean(y_train)
y_train_scaled = y_train - y_scaler
p = Maxpolydegree-1
I = np.eye(p,p)
# Decide which values of lambda to use
nlambdas = 6
MSEOwnRidgePredict = np.zeros(nlambdas)
MSERidgePredict = np.zeros(nlambdas)
lambdas = np.logspace(-4, 2, nlambdas)
for i in range(nlambdas):
lmb = lambdas[i]
OwnRidgeBeta = np.linalg.pinv(X_train_scaled.T @ X_train_scaled+lmb*I) @ X_train_scaled.T @ (y_train_scaled)
intercept_ = y_scaler - X_train_mean@OwnRidgeBeta #The intercept can be shifted so the model can predict on uncentered data
#Add intercept to prediction
ypredictOwnRidge = X_test_scaled @ OwnRidgeBeta + y_scaler
RegRidge = linear_model.Ridge(lmb)
RegRidge.fit(X_train,y_train)
ypredictRidge = RegRidge.predict(X_test)
MSEOwnRidgePredict[i] = MSE(y_test,ypredictOwnRidge)
MSERidgePredict[i] = MSE(y_test,ypredictRidge)
print("Beta values for own Ridge implementation")
print(OwnRidgeBeta) #Intercept is given by mean of target variable
print("Beta values for Scikit-Learn Ridge implementation")
print(RegRidge.coef_)
print('Intercept from own implementation:')
print(intercept_)
print('Intercept from Scikit-Learn Ridge implementation')
print(RegRidge.intercept_)
print("MSE values for own Ridge implementation")
print(MSEOwnRidgePredict[i])
print("MSE values for Scikit-Learn Ridge implementation")
print(MSERidgePredict[i])
# Now plot the results
plt.figure()
plt.plot(np.log10(lambdas), MSEOwnRidgePredict, 'b--', label = 'MSE own Ridge Test')
plt.plot(np.log10(lambdas), MSERidgePredict, 'g--', label = 'MSE SL Ridge Test')
plt.xlabel('log10(lambda)')
plt.ylabel('MSE')
plt.legend()
plt.show()
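A small added check (not part of the original) makes the role of the shifted intercept explicit: predicting on the uncentered test data with intercept_ is algebraically identical to predicting on the centered data and adding back the target mean.
# Added check: X_test @ beta + intercept_ equals X_test_scaled @ beta + y_scaler (should print True)
print(np.allclose(X_test @ OwnRidgeBeta + intercept_, X_test_scaled @ OwnRidgeBeta + y_scaler))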
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import scipy.linalg as scl
from sklearn.model_selection import train_test_split
import tqdm
sns.set(color_codes=True)
cmap_args=dict(vmin=-1., vmax=1., cmap='seismic')
L = 40
n = int(1e4)
spins = np.random.choice([-1, 1], size=(n, L))
J = 1.0
energies = np.zeros(n)
for i in range(n):
energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1))
X = np.zeros((n, L ** 2))
for i in range(n):
X[i] = np.outer(spins[i], spins[i]).ravel()
y = energies
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
X_train_own = np.concatenate(
(np.ones(len(X_train))[:, np.newaxis], X_train),
axis=1
)
X_test_own = np.concatenate(
(np.ones(len(X_test))[:, np.newaxis], X_test),
axis=1
)
def ols_inv(x: np.ndarray, y: np.ndarray) -> np.ndarray:
return scl.inv(x.T @ x) @ (x.T @ y)
beta = ols_inv(X_train_own, y_train)
def ols_svd(x: np.ndarray, y: np.ndarray) -> np.ndarray:
u, s, v = scl.svd(x)
return v.T @ scl.pinv(scl.diagsvd(s, u.shape[0], v.shape[0])) @ u.T @ y
beta = ols_svd(X_train_own,y_train)
J = beta[1:].reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J, **cmap_args)
plt.title("OLS", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
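The double loop used above to build the design matrix from outer products of each spin configuration can also be written in a single vectorized line; a small added equivalence check (illustrative, not in the original):
# Added: vectorized construction of the outer-product design matrix
X_vec = np.einsum('ij,ik->ijk', spins, spins).reshape(n, L * L)
print(np.allclose(X_vec, X))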
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import scipy.linalg as scl
from sklearn.model_selection import train_test_split
import sklearn.linear_model as skl
import tqdm
sns.set(color_codes=True)
cmap_args=dict(vmin=-1., vmax=1., cmap='seismic')
L = 40
n = int(1e4)
spins = np.random.choice([-1, 1], size=(n, L))
J = 1.0
energies = np.zeros(n)
for i in range(n):
energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1))
X = np.zeros((n, L ** 2))
for i in range(n):
X[i] = np.outer(spins[i], spins[i]).ravel()
y = energies
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.96)
X_train_own = np.concatenate(
(np.ones(len(X_train))[:, np.newaxis], X_train),
axis=1
)
X_test_own = np.concatenate(
(np.ones(len(X_test))[:, np.newaxis], X_test),
axis=1
)
clf = skl.LinearRegression().fit(X_train, y_train)
J_sk = clf.coef_.reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_sk, **cmap_args)
plt.title("LinearRegression from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
_lambda = 0.1
clf_ridge = skl.Ridge(alpha=_lambda).fit(X_train, y_train)
J_ridge_sk = clf_ridge.coef_.reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_ridge_sk, **cmap_args)
plt.title("Ridge from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
clf_lasso = skl.Lasso(alpha=_lambda).fit(X_train, y_train)
J_lasso_sk = clf_lasso.coef_.reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_lasso_sk, **cmap_args)
plt.title("Lasso from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
lambdas = np.logspace(-4, 5, 10)
train_errors = {
"ols_sk": np.zeros(lambdas.size),
"ridge_sk": np.zeros(lambdas.size),
"lasso_sk": np.zeros(lambdas.size)
}
test_errors = {
"ols_sk": np.zeros(lambdas.size),
"ridge_sk": np.zeros(lambdas.size),
"lasso_sk": np.zeros(lambdas.size)
}
plot_counter = 1
fig = plt.figure(figsize=(32, 54))
for i, _lambda in enumerate(tqdm.tqdm(lambdas)):
for key, method in zip(
["ols_sk", "ridge_sk", "lasso_sk"],
[skl.LinearRegression(), skl.Ridge(alpha=_lambda), skl.Lasso(alpha=_lambda)]
):
method = method.fit(X_train, y_train)
train_errors[key][i] = method.score(X_train, y_train)
test_errors[key][i] = method.score(X_test, y_test)
omega = method.coef_.reshape(L, L)
plt.subplot(10, 5, plot_counter)
plt.imshow(omega, **cmap_args)
plt.title(r"%s, $\lambda = %.4f$" % (key, _lambda))
plot_counter += 1
plt.show()
fig = plt.figure(figsize=(20, 14))
colors = {
"ols_sk": "r",
"ridge_sk": "y",
"lasso_sk": "c"
}
for key in train_errors:
plt.semilogx(
lambdas,
train_errors[key],
colors[key],
label="Train {0}".format(key),
linewidth=4.0
)
for key in test_errors:
plt.semilogx(
lambdas,
test_errors[key],
colors[key] + "--",
label="Test {0}".format(key),
linewidth=4.0
)
plt.legend(loc="best", fontsize=18)
plt.xlabel(r"$\lambda$", fontsize=18)
plt.ylabel(r"$R^2$", fontsize=18)
plt.tick_params(labelsize=18)
plt.show()
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
from sklearn.metrics import mean_squared_error
from IPython.display import display
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("chddata.csv"),'r')
# Read the chd data as csv file and organize the data into arrays with age group, age, and chd
chd = pd.read_csv(infile, names=('ID', 'Age', 'Agegroup', 'CHD'))
chd.columns = ['ID', 'Age', 'Agegroup', 'CHD']
output = chd['CHD']
age = chd['Age']
agegroup = chd['Agegroup']
numberID = chd['ID']
display(chd)
plt.scatter(age, output, marker='o')
plt.axis([18,70.0,-0.1, 1.2])
plt.xlabel(r'Age')
plt.ylabel(r'CHD')
plt.title(r'Age distribution and Coronary heart disease')
plt.show()
agegroupmean = np.array([0.1, 0.133, 0.250, 0.333, 0.462, 0.625, 0.765, 0.800])
group = np.array([1, 2, 3, 4, 5, 6, 7, 8])
plt.plot(group, agegroupmean, "r-")
plt.axis([0,9,0, 1.0])
plt.xlabel(r'Age group')
plt.ylabel(r'CHD mean values')
plt.title(r'Mean values for each age group')
plt.show()
The sigmoid function (or the logistic curve) is a
function that takes any real number, z, and outputs a number in (0,1).
It is useful in neural networks for assigning weights on a relative scale.
The value z is the weighted sum of the parameters involved in the learning algorithm.
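Explicitly, the function plotted below is $\sigma(z) = 1/(1+e^{-z})$; a property worth noting (added here for reference) is that its derivative can be written as $\sigma(z)\,(1-\sigma(z))$, which is what makes it convenient in gradient-based learning.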
import numpy
import matplotlib.pyplot as plt
import math as mt
z = numpy.arange(-5, 5, .1)
sigma_fn = numpy.vectorize(lambda z: 1/(1+numpy.exp(-z)))
sigma = sigma_fn(z)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z, sigma)
ax.set_ylim([-0.1, 1.1])
ax.set_xlim([-5,5])
ax.grid(True)
ax.set_xlabel('z')
ax.set_title('sigmoid function')
plt.show()
Step Function
z = numpy.arange(-5, 5, .02)
step_fn = numpy.vectorize(lambda z: 1.0 if z >= 0.0 else 0.0)
step = step_fn(z)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z, step)
ax.set_ylim([-0.5, 1.5])
ax.set_xlim([-5,5])
ax.grid(True)
ax.set_xlabel('z')
ax.set_title('step function')
plt.show()
tanh Function
z = numpy.arange(-2*mt.pi, 2*mt.pi, 0.1)
t = numpy.tanh(z)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z, t)
ax.set_ylim([-1.0, 1.0])
ax.set_xlim([-2*mt.pi,2*mt.pi])
ax.grid(True)
ax.set_xlabel('z')
ax.set_title('tanh function')
plt.show()
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
# Load the data
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)
print(X_train.shape)
print(X_test.shape)
# Logistic Regression
logreg = LogisticRegression(solver='lbfgs')
logreg.fit(X_train, y_train)
print("Test set accuracy with Logistic Regression: {:.2f}".format(logreg.score(X_test,y_test)))
#now scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Logistic Regression
logreg.fit(X_train_scaled, y_train)
print("Test set accuracy Logistic Regression with scaled data: {:.2f}".format(logreg.score(X_test_scaled,y_test)))
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
cancer = load_breast_cancer()
import pandas as pd
# Making a data frame
cancerpd = pd.DataFrame(cancer.data, columns=cancer.feature_names)
fig, axes = plt.subplots(15,2,figsize=(10,20))
malignant = cancer.data[cancer.target == 0]
benign = cancer.data[cancer.target == 1]
ax = axes.ravel()
for i in range(30):
_, bins = np.histogram(cancer.data[:,i], bins =50)
ax[i].hist(malignant[:,i], bins = bins, alpha = 0.5)
ax[i].hist(benign[:,i], bins = bins, alpha = 0.5)
ax[i].set_title(cancer.feature_names[i])
ax[i].set_yticks(())
ax[0].set_xlabel("Feature magnitude")
ax[0].set_ylabel("Frequency")
ax[0].legend(["Malignant", "Benign"], loc ="best")
fig.tight_layout()
plt.show()
import seaborn as sns
correlation_matrix = cancerpd.corr().round(1)
# use the heatmap function from seaborn to plot the correlation matrix
# annot = True to print the values inside the square
plt.figure(figsize=(15,8))
sns.heatmap(data=correlation_matrix, annot=True)
plt.show()
cancerpd = pd.DataFrame(cancer.data, columns=cancer.feature_names)
correlation_matrix = cancerpd.corr().round(1)
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
# Load the data
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)
print(X_train.shape)
print(X_test.shape)
# Logistic Regression
logreg = LogisticRegression(solver='lbfgs')
logreg.fit(X_train, y_train)
print("Test set accuracy with Logistic Regression: {:.2f}".format(logreg.score(X_test,y_test)))
#now scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Logistic Regression
logreg.fit(X_train_scaled, y_train)
print("Test set accuracy Logistic Regression with scaled data: {:.2f}".format(logreg.score(X_test_scaled,y_test)))
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_validate
#Cross validation
accuracy = cross_validate(logreg,X_test_scaled,y_test,cv=10)['test_score']
print(accuracy)
print("Test set accuracy with Logistic Regression and scaled data: {:.2f}".format(logreg.score(X_test_scaled,y_test)))
import scikitplot as skplt
y_pred = logreg.predict(X_test_scaled)
skplt.metrics.plot_confusion_matrix(y_test, y_pred, normalize=True)
plt.show()
y_probas = logreg.predict_proba(X_test_scaled)
skplt.metrics.plot_roc(y_test, y_probas)
plt.show()
skplt.metrics.plot_cumulative_gain(y_test, y_probas)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To think about, first part
Step2: What does centering (subtracting the mean values) mean mathematically?
Step3: The intercept is the value of our output/target variable
Step4: The results here agree when we force Scikit-Learn's Ridge function to include the first column in our design matrix.
Step5: We see here, when compared to the code which explicitly includes the
Step6: Here we use ordinary least squares
Step7: Linear regression
Step8: Singular Value decomposition
Step9: When extracting the $J$-matrix we need to make sure that we remove the intercept, as is done here
Step10: A way of looking at the coefficients in $J$ is to plot the matrices as images.
Step11: It is interesting to note that OLS
Step12: A more general form for the one-dimensional Ising model is
Step13: We will do all fitting with Scikit-Learn,
Step14: When extracting the $J$-matrix we make sure to remove the intercept
Step15: And then we plot the results
Step16: The results agree perfectly with our previous discussion where we used our own code.
Step17: LASSO regression
Step18: It is quite striking how LASSO breaks the symmetry of the coupling
Step19: We see that LASSO reaches a good solution for low
Step20: From the above figure we can see that LASSO with $\lambda = 10^{-2}$
Step21: Plotting the mean value for each group
Step25: We are now trying to find a function $f(y\vert x)$, that is a function which gives us an expected value for the output $y$ with a given input $x$.
Step26: Two parameters
Step27: Using the correlation matrix
Step28: Discussing the correlation data
Step29: and then
Step30: Diagonalizing this matrix we can in turn say something about which
|
13,316
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tensorflow-probability
# to generate gifs
!pip install imageio
!pip install git+https://github.com/tensorflow/docs
from IPython import display
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import PIL
import tensorflow as tf
import tensorflow_probability as tfp
import time
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
def preprocess_images(images):
images = images.reshape((images.shape[0], 28, 28, 1)) / 255.
return np.where(images > .5, 1.0, 0.0).astype('float32')
train_images = preprocess_images(train_images)
test_images = preprocess_images(test_images)
train_size = 60000
batch_size = 32
test_size = 10000
train_dataset = (tf.data.Dataset.from_tensor_slices(train_images)
.shuffle(train_size).batch(batch_size))
test_dataset = (tf.data.Dataset.from_tensor_slices(test_images)
.shuffle(test_size).batch(batch_size))
class CVAE(tf.keras.Model):
Convolutional variational autoencoder.
def __init__(self, latent_dim):
super(CVAE, self).__init__()
self.latent_dim = latent_dim
self.encoder = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latent_dim + latent_dim),
]
)
self.decoder = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(latent_dim,)),
tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(7, 7, 32)),
tf.keras.layers.Conv2DTranspose(
filters=64, kernel_size=3, strides=2, padding='same',
activation='relu'),
tf.keras.layers.Conv2DTranspose(
filters=32, kernel_size=3, strides=2, padding='same',
activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=1, padding='same'),
]
)
@tf.function
def sample(self, eps=None):
if eps is None:
eps = tf.random.normal(shape=(100, self.latent_dim))
return self.decode(eps, apply_sigmoid=True)
def encode(self, x):
mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
eps = tf.random.normal(shape=mean.shape)
return eps * tf.exp(logvar * .5) + mean
def decode(self, z, apply_sigmoid=False):
logits = self.decoder(z)
if apply_sigmoid:
probs = tf.sigmoid(logits)
return probs
return logits
optimizer = tf.keras.optimizers.Adam(1e-4)
def log_normal_pdf(sample, mean, logvar, raxis=1):
log2pi = tf.math.log(2. * np.pi)
return tf.reduce_sum(
-.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
axis=raxis)
def compute_loss(model, x):
mean, logvar = model.encode(x)
z = model.reparameterize(mean, logvar)
x_logit = model.decode(z)
cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
logpz = log_normal_pdf(z, 0., 0.)
logqz_x = log_normal_pdf(z, mean, logvar)
return -tf.reduce_mean(logpx_z + logpz - logqz_x)
@tf.function
def train_step(model, x, optimizer):
Executes one training step and returns the loss.
This function computes the loss and gradients, and uses the latter to
update the model's parameters.
with tf.GradientTape() as tape:
loss = compute_loss(model, x)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
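For reference (an added note), compute_loss above returns the negative of a single-sample Monte Carlo estimate of the ELBO, $\log p(x|z) + \log p(z) - \log q(z|x)$, averaged over the batch, and train_step minimizes that quantity with the Adam optimizer defined above.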
epochs = 10
# set the dimensionality of the latent space to a plane for visualization later
latent_dim = 2
num_examples_to_generate = 16
# keeping the random vector constant for generation (prediction) so
# it will be easier to see the improvement.
random_vector_for_generation = tf.random.normal(
shape=[num_examples_to_generate, latent_dim])
model = CVAE(latent_dim)
def generate_and_save_images(model, epoch, test_sample):
mean, logvar = model.encode(test_sample)
z = model.reparameterize(mean, logvar)
predictions = model.sample(z)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i + 1)
plt.imshow(predictions[i, :, :, 0], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
# Pick a sample of the test set for generating output images
assert batch_size >= num_examples_to_generate
for test_batch in test_dataset.take(1):
test_sample = test_batch[0:num_examples_to_generate, :, :, :]
generate_and_save_images(model, 0, test_sample)
for epoch in range(1, epochs + 1):
start_time = time.time()
for train_x in train_dataset:
train_step(model, train_x, optimizer)
end_time = time.time()
loss = tf.keras.metrics.Mean()
for test_x in test_dataset:
loss(compute_loss(model, test_x))
elbo = -loss.result()
display.clear_output(wait=False)
print('Epoch: {}, Test set ELBO: {}, time elapse for current epoch: {}'
.format(epoch, elbo, end_time - start_time))
generate_and_save_images(model, epoch, test_sample)
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
plt.imshow(display_image(epoch))
plt.axis('off') # Display images
anim_file = 'cvae.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import tensorflow_docs.vis.embed as embed
embed.embed_file(anim_file)
def plot_latent_images(model, n, digit_size=28):
Plots n x n digit images decoded from the latent space.
norm = tfp.distributions.Normal(0, 1)
grid_x = norm.quantile(np.linspace(0.05, 0.95, n))
grid_y = norm.quantile(np.linspace(0.05, 0.95, n))
image_width = digit_size*n
image_height = image_width
image = np.zeros((image_height, image_width))
for i, yi in enumerate(grid_x):
for j, xi in enumerate(grid_y):
z = np.array([[xi, yi]])
x_decoded = model.sample(z)
digit = tf.reshape(x_decoded[0], (digit_size, digit_size))
image[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit.numpy()
plt.figure(figsize=(10, 10))
plt.imshow(image, cmap='Greys_r')
plt.axis('Off')
plt.show()
plot_latent_images(model, 20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convolutional Variational Autoencoder
Step2: Load the MNIST dataset
Step3: Batch and shuffle the data using tf.data
Step5: Define the encoder and decoder networks with tf.keras.Sequential
Step7: Define the loss function and the optimizer
Step8: Training
Step9: Display a generated image from the last training epoch
Step10: Display an animated GIF of all the saved images
Step12: Display a 2D manifold of digits from the latent space
|
13,317
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
%sql mysql://studentuser:studentpw@mysqlserver/dognitiondb
%sql USE dognitiondb
%config SqlMagic.displaylimit=25
%%sql
SELECT user_guid
FROM users
WHERE free_start_user=1
LIMIT 0,5;
%%sql
DESCRIBE dogs
%%sql
SELECT dog_guid
FROM dogs
WHERE dna_tested=1;
%%sql
DESCRIBE users
%%sql
SELECT user_guid
FROM users
WHERE membership_type=2;
%%sql
SELECT *
FROM users
WHERE state IN ('NC','NY');
%%sql
DESCRIBE complete_tests
%%sql
SELECT dog_guid, created_at
FROM complete_tests
WHERE created_at < '2015-10-15'
%%sql
SELECT user_guid
FROM users
WHERE state IS NOT NULL;
%%sql
DESCRIBE reviews
%%sql
SELECT dog_guid, subcategory_name, test_name
FROM reviews
WHERE YEAR(created_at)=2014
#WHERE created_at <= '2014-12-31' and created_at >= '2014-01-01'
LIMIT 0, 10;
%%sql
DESCRIBE dogs
%%sql
SELECT user_guid
FROM dogs
WHERE gender='female' and breed LIKE ('%terrier%')
%%sql
SELECT dog_guid, test_name, subcategory_name
FROM reviews
WHERE created_at >= '2014-10-01'
LIMIT 0, 100;
%%sql
SELECT user_guid
FROM users
WHERE state='CA' and last_active_at<'2015-01-10';
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recall the general syntax structure we learned from the "Introduction to Query Syntax" video at the beginning of the week
Step2: Question 1
Step3: The SELECT statement can be used to interact with all data types, and there are many operators and functions that allow you to interact with the data in different ways. Here are some resources that describe these operators and functions
Step4: Now let's try using the WHERE statement to interact with text data (called "strings").
Step5: Next, let's try using the WHERE statement to interact with datetime data. Time-related data is a little more complicated to work with than other types of data, because it must have a very specific format. MySQL comes with the following data types for storing a date or a date/time value in the database
Step6: Last, let's use the WHERE statement in combination with two very important operators
Step7: Practice writing your own SELECT and WHERE statements!
Step8: Question 7
Step9: Question 8
Step10: There are many more operators you can use in your WHERE clauses to restrict the data you select as well. We do not have the space to go over each one individually in this lesson, but I encourage you to explore them on your own. <mark>This is a great area to practice being fearless and bold in your desire to learn new things! The more you try, the more you will learn.</mark>
|
13,318
|
<ASSISTANT_TASK:>
Python Code:
import re
import glob
import numpy
import iris
import iris.coord_categorisation
from iris.experimental.equalise_cubes import equalise_attributes
import warnings
warnings.filterwarnings('ignore')
lat_constraint = iris.Constraint(latitude=lambda cell: cell <= -30)
def read_hfds_data(file_list, lat_constraint):
Read in data for a given latitude constraint
cube = iris.load(file_list, 'surface_downward_heat_flux_in_sea_water' & lat_constraint)
equalise_attributes(cube)
iris.util.unify_time_units(cube)
cube = cube.concatenate_cube()
return cube
hfds_control_files = glob.glob('/g/data/ua6/DRSv2/CMIP5/GISS-E2-R/piControl/mon/ocean/r1i1p1/hfds/latest/hfds_Omon_GISS-E2-R_piControl_r1i1p1_*.nc')
hfds_control_cube = read_hfds_data(hfds_control_files, lat_constraint)
hfds_control_cube
hfds_historical_files = glob.glob('/g/data/ua6/DRSv2/CMIP5/GISS-E2-R/historical/mon/ocean/r1i1p1/hfds/latest/hfds_Omon_GISS-E2-R_historical_r1i1p1_*.nc')
hfds_historical_cube = read_hfds_data(hfds_historical_files, lat_constraint)
hfds_historical_cube
def broadcast_array(array, axis_index, shape):
Broadcast an array to a target shape.
Args:
array (numpy.ndarray)
axis_index (int or tuple): Postion in the target shape that the
axis/axes of the array corresponds to
e.g. if array corresponds to (depth, lat, lon) in (time, depth, lat, lon)
then axis_index = [1, 3]
e.g. if array corresponds to (lat) in (time, depth, lat, lon)
then axis_index = 2
shape (tuple): shape to broadcast to
For a one dimensional array, make start_axis_index = end_axis_index
if type(axis_index) in [float, int]:
start_axis_index = end_axis_index = axis_index
else:
assert len(axis_index) == 2
start_axis_index, end_axis_index = axis_index
dim = start_axis_index - 1
while dim >= 0:
array = array[numpy.newaxis, ...]
array = numpy.repeat(array, shape[dim], axis=0)
dim = dim - 1
dim = end_axis_index + 1
while dim < len(shape):
array = array[..., numpy.newaxis]
array = numpy.repeat(array, shape[dim], axis=-1)
dim = dim + 1
return array
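A tiny added usage example for broadcast_array (the shapes are made up, purely for illustration): broadcasting a per-timestep array across a (time, lat, lon) grid.
# Added example: broadcast a (time,) array across a (time, lat, lon) shape
demo_spans = numpy.array([31., 28., 31.])
demo_broadcast = broadcast_array(demo_spans, 0, (3, 4, 5))
print(demo_broadcast.shape)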
def convert_to_joules(cube):
Convert units to Joules
assert 'W' in str(cube.units)
assert 'days' in str(cube.coord('time').units)
time_span_days = cube.coord('time').bounds[:, 1] - cube.coord('time').bounds[:, 0]
time_span_seconds = time_span_days * 60 * 60 * 24
cube.data = cube.data * broadcast_array(time_span_seconds, 0, cube.shape)
cube.units = str(cube.units).replace('W', 'J')
return cube
hfds_control_cube = convert_to_joules(hfds_control_cube)
hfds_control_cube
hfds_historical_cube = convert_to_joules(hfds_historical_cube)
hfds_historical_cube
def annual_sum(cube):
Calculate the annual sum.
iris.coord_categorisation.add_year(cube, 'time')
cube = cube.aggregated_by(['year'], iris.analysis.SUM)
cube.remove_coord('year')
return cube
hfds_control_cube = annual_sum(hfds_control_cube)
hfds_control_cube
hfds_historical_cube = annual_sum(hfds_historical_cube)
hfds_historical_cube
def multiply_by_area(cube, area_cube):
Multiply each cell of cube by its area.
area_data = broadcast_array(area_cube.data, [1, 2], cube.shape)
cube.data = cube.data * area_data
units = str(cube.units)
cube.units = units.replace('m-2', '')
return cube
area_file = '/g/data/ua6/DRSv2/CMIP5/GISS-E2-R/piControl/fx/ocean/r0i0p0/areacello/latest/areacello_fx_GISS-E2-R_piControl_r0i0p0.nc'
area_cube = iris.load_cube(area_file, 'cell_area' & lat_constraint)
area_cube
hfds_control_cube = multiply_by_area(hfds_control_cube, area_cube)
hfds_control_cube
hfds_historical_cube = multiply_by_area(hfds_historical_cube, area_cube)
hfds_historical_cube
hfds_control_cube = hfds_control_cube.collapsed(['latitude', 'longitude'], iris.analysis.SUM)
hfds_control_cube
hfds_historical_cube = hfds_historical_cube.collapsed(['latitude', 'longitude'], iris.analysis.SUM)
hfds_historical_cube
def get_time_constraint(time_list):
Get the time constraint used for subsetting an iris cube.
start_date, end_date = time_list
date_pattern = '([0-9]{4})-([0-9]{1,2})-([0-9]{1,2})'
assert re.search(date_pattern, start_date)
assert re.search(date_pattern, end_date)
start_year, start_month, start_day = start_date.split('-')
end_year, end_month, end_day = end_date.split('-')
time_constraint = iris.Constraint(time=lambda t: iris.time.PartialDateTime(year=int(start_year), month=int(start_month), day=int(start_day)) <= t.point <= iris.time.PartialDateTime(year=int(end_year), month=int(end_month), day=int(end_day)))
return time_constraint
def get_control_time_constraint(control_cube, hist_cube, time_bounds):
Define the time constraints for the control data.
iris.coord_categorisation.add_year(control_cube, 'time')
iris.coord_categorisation.add_year(hist_cube, 'time')
branch_time = hist_cube.attributes['branch_time']
index = 0
for bounds in control_cube.coord('time').bounds:
lower, upper = bounds
if lower <= branch_time < upper:
break
else:
index = index + 1
branch_year = control_cube.coord('year').points[index]
hist_start_year = hist_cube.coord('year').points[0]
start_gap = int(time_bounds[0].split('-')[0]) - hist_start_year
end_gap = int(time_bounds[1].split('-')[0]) - hist_start_year
control_start_year = branch_year + start_gap
control_end_year = branch_year + end_gap
control_start_date = str(control_start_year).zfill(4)+'-01-01'
control_end_date = str(control_end_year).zfill(4)+'-01-01'
time_constraint = get_time_constraint([control_start_date, control_end_date])
control_cube.remove_coord('year')
hist_cube.remove_coord('year')
return time_constraint
def temporal_sum(cube, time_constraint):
Calculate temporal sum over a given time period.
cube = cube.copy()
temporal_subset = cube.extract(time_constraint)
result = temporal_subset.collapsed('time', iris.analysis.SUM)
return float(result.data)
period_1870 = ['1861-01-01', '1880-12-31']
period_1995 = ['1986-01-01', '2005-12-31']
hist_1870_constraint = get_time_constraint(period_1870)
hist_1995_constraint = get_time_constraint(period_1995)
hist_1870 = temporal_sum(hfds_historical_cube, hist_1870_constraint)
hist_1995 = temporal_sum(hfds_historical_cube, hist_1995_constraint)
print('historial, 1870: ', hist_1870, 'J')
print('historial, 1995: ', hist_1995, 'J')
control_1870_constraint = get_control_time_constraint(hfds_control_cube, hfds_historical_cube, period_1870)
control_1995_constraint = get_control_time_constraint(hfds_control_cube, hfds_historical_cube, period_1995)
control_1870 = temporal_sum(hfds_control_cube, control_1870_constraint)
control_1995 = temporal_sum(hfds_control_cube, control_1995_constraint)
print('control, 1870: ', control_1870, 'J')
print('control, 1995: ', control_1995, 'J')
# Drift correction: subtract the corresponding change in the piControl experiment
change = (hist_1995 - hist_1870) - (control_1995 - control_1870)
print('Cumulative oceanic heat uptake south of 30S between 1870 (1861-80) and 1995 (1986-2005):', change)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Step 1
Step5: Step 2
Step7: Step 3
Step9: Step 4
Step10: Step 5
Step14: Step 6
Step15: Final result
|
13,319
|
<ASSISTANT_TASK:>
Python Code:
# Load the basic libraries...
import numpy as np
import sklearn
import mlutils
import matplotlib.pyplot as plt
%pylab inline
from sklearn.linear_model import LinearRegression, RidgeClassifier
from sklearn.metrics import accuracy_score
seven_X = np.array([[2,1], [2,3], [1,2], [3,2], [5,2], [5,4], [6,3]])
seven_y = np.array([1, 1, 1, 1, 0, 0, 0])
# Your code here
# Your code here
outlier_X = np.append(seven_X, [[12,8]], axis=0)
outlier_y = np.append(seven_y, 0)
# Your code here
unsep_X = np.append(seven_X, [[2,2]], axis=0)
unsep_y = np.append(seven_y, 0)
# Your code here
from sklearn.datasets import make_classification
# Your code here
# Your code here
# Your code here
from numpy import linalg
def lr_h(x, w):
# Your code here
pass
def cross_entropy_error(X, y, w):
# Vaš kôd ovdje
pass
def lr_train(X, y, eta=0.01, max_iter=2000, trace=False, alpha=0, epsilon=0.000001):
# Your code here
pass
# Your code here
# Your code here
from sklearn.linear_model import LogisticRegression
# Your code here
# Your code here
# Your code here
# Your code here
from numpy.linalg import norm
# Your code here
from sklearn.preprocessing import PolynomialFeatures
#poly = PolynomialFeatures(2)
#...
#mlutils.plot_2d_clf_problem(X, y, lambda x : model.predict(poly.transform(x)))
# Your code here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tasks
Step2: (a)
Step3: To convince yourself that the implementation you just tried is nothing but ordinary linear regression, write code that arrives at the same solution using only the LinearRegression class. The prediction function, which you pass as the third argument h to the plot_2d_clf_problem function, can be defined with a lambda expression
Step4: Q
Step5: Q
Step6: Q
Step7: Train three binary classifiers, $h_1$, $h_2$ and $h_3$, and plot the boundaries between the classes (three plots). Then define $h(\mathbf{x})=\mathrm{argmax}_j h_j(\mathbf{x})$ (write your own predict function that does this) and plot the class boundaries for that model. Then verify that you would get an identical result by applying the RidgeClassifier model directly, since for a multiclass problem that model internally implements the one-vs-rest scheme.
Step8: 3. Logistic regression
Step9: Q
Step10: (c)
Step11: Q
Step12: Q
Step13: 4. Analysis of logistic regression
Step14: (b)
Step15: (c)
Step16: 5. Regularized logistic regression
Step17: 6. Logistic regression with a feature mapping function
|
13,320
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
# tuberculosis (TB) dataset
path_tb = '/Users/ericfourrier/Documents/ProjetR/tidy-data/data/tb.csv'
df_tb = pd.read_csv(path_tb)
df_tb.head(20)
# clean column names
df_tb = df_tb.rename(columns={'iso2':'country'}) # rename iso2 to country
df_tb = df_tb.drop(['new_sp'],axis = 1)
df_tb.columns = [c.replace('new_sp_','') for c in df_tb.columns] # remove new_sp_
df_tb.head()
df_tb_wide = pd.melt(df_tb,id_vars = ['country','year'])
df_tb_wide = df_tb_wide.rename(columns={'variable':'column','value':'cases'})
df_tb_wide
# dictionary mapping age codes to age-group labels:
ages = {"04" : "0-4", "514" : "5-14", "014" : "0-14",
"1524" : "15-24","2534" : "25-34", "3544" : "35-44",
"4554" : "45-54", "5564" : "55-64", "65": "65+", "u" : np.nan}
# Create genre and age from the mixed type column
df_tb_wide['age']=df_tb_wide['column'].str[1:]
df_tb_wide['genre']=df_tb_wide['column'].str[0]
df_tb_wide = df_tb_wide.drop('column', axis=1)
# change category
df_tb_wide['age'] = df_tb_wide['age'].map(lambda x: ages[x])
# clean dataset
df_tb_wide
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Original TB dataset. Corresponding to each ‘m’ column for males, there is also an ‘f’ column
Step2: Create sex and age columns from variable 'column'
|
13,321
|
<ASSISTANT_TASK:>
Python Code:
import requests # to make GET request
from bs4 import BeautifulSoup # to parse the HTML response
import time # to pause between calls
import csv # to write data to csv
import pandas # to see CSV
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp')
# read the content of the server’s response as a string
page_source = response.text
print(page_source[:1000])
# parse the response into an HTML tree soup object
soup = BeautifulSoup(page_source, 'html5lib')
# take a look
print(soup.prettify()[:1000])
soup.find_all("a")
soup("a")
# get only the 'a' tags in 'sidemenu' class
soup("a", class_="sidemenu")
# get elements with "a.sidemenu" CSS Selector.
soup.select("a.sidemenu")
# your code here
# this is a list
soup.select("a.sidemenu")
# we first want to get an individual tag object
first_link = soup.select("a.sidemenu")[0]
# check out its class
print(type(first_link))
print(first_link.text)
print(first_link.text.strip())
print(first_link['href'])
# your code here
print(rel_paths)
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
# get all tr elements
rows = soup.find_all("tr")
print(len(rows))
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
print(rows[2].prettify())
# select only those 'td' tags with class 'detail'
row = rows[2]
detail_cells = row.select('td.detail')
detail_cells
# Keep only the text in each of those cells
row_data = [cell.text for cell in detail_cells]
print(row_data)
# check it out
print(row_data[0]) # name
print(row_data[3]) # district
print(row_data[4]) # party
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
# create empty list to store our data
members = []
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
# loop through all rows
for row in rows:
# select only those 'td' tags with class 'detail'
# get rid of junk rows
# keep only the text in each of those cells
# collect information
# store in a tuple
# append to list
print(len(members))
print()
print(members)
# your code here
members[:5]
# your code here
def get_bills(url):
# make the GET request
response = requests.get(url)
page_source = response.text
soup = BeautifulSoup(page_source, "html5lib")
# get the table rows
rows = soup.select('tr tr tr')
# make empty list to collect the info
bills = []
for row in rows:
# get columns
# get text in each column
# append data
return(bills)
# uncomment to test your code:
test_url = members[0][3]
print(test_url)
get_bills(test_url)[0:5]
bills_info = []
for member in members[:3]: # only go through 5 members
print(member[0])
member_bills = get_bills(member[3])
for b in member_bills:
bill = list(member) + list(b)
bills_info.append(bill)
time.sleep(5)
bills_info
# manually decide on header names
header = ['Senator', 'District', 'Party', 'Bills Link', 'Description', 'Chamber', 'Last Action', 'Last Action Date']
with open('all-bills.csv', 'w') as output_file:
csv_writer = csv.writer(output_file)
csv_writer.writerow(header)
csv_writer.writerows(bills_info)
pandas.read_csv('all-bills.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Using BeautifulSoup
Step2: 1.2 soup it
Step3: 1.3 Find Elements
Step4: NB
Step5: That's a lot! Many elements on a page will have the same HTML tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes?
Step6: Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into the .select() to get all elements with that string as a valid CSS selector.
Step7: Using CSS is one way to organize how we stylize a website. They allow us to categorize and label certain HTML elements, and use these categories and labels to apply specfic styling. CSS selectors are what we use to identify these elements, and then decide what style to apply. We won't have time today to go into detail about HTML and CSS, but it's worth talking about the three most important CSS selectors
Step8: 1.4 Get Attributes and Text of Elements
Step9: It's a tag! Which means it has a text member
Step10: You'll see there is some extra spacing here, we can use the strip method to remove that
Step11: Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes.
Step12: Nice, but that doesn't look like a full URL! Don't worry, we'll get to this soon.
Step13: 2. Collecting information
Step14: 2.2 Find the right elements and text
Step15: But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want.
Step16: We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above.
Step17: Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member.
Step18: Now we can combine the BeautifulSoup tools with our basic python skills to scrape an entire web page.
Step19: 2.3 Loop it all together
Step20: Challenge 5
Step21: Cool! Now you can probably guess how to loop it all together by iterating through the links we just extracted.
Step22: 3.2 Get all the bills
Step23: 4. Export to CSV
|
13,322
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# set_words_funtion_line = lambda x :set([word for word in x.split(' ')])
# set_source_words = set_words_funtion_line(source_text);
# set_target_words = set_words_funtion_line(target_text);
#
# print(source_text)
# print(source_vocab_to_int)
# print(target_vocab_to_int)
source_words_ids_prime = [[source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in line.split()]\
for line in source_text.split('\n')];
target_words_ids = [];
# for line in source_text.split('\n'):
# alist = [];
# for word in line.split(' '):
# alist.append(source_vocab_to_int.get(word));
# alist.append(target_vocab_to_int['<EOS>']);
# source_words_ids.append(alist);
for line in target_text.split('\n'):
alist = [];
for word in line.split():
# if target_vocab_to_int.get(word) == None:
# print(word)
# alist.append(target_vocab_to_int['<UNK>']);
alist.append(target_vocab_to_int.get(word));
alist.append(target_vocab_to_int['<EOS>']);
target_words_ids.append(alist);
# print(target_vocab_to_int['<EOS>']) ;
# print(source_words_ids_prime);
# print (target_words_ids);
# print (target_text)
# print(source_words_ids_prime)
# print(target_words_ids)
return source_words_ids_prime, target_words_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
Input = tf.placeholder(tf.int32,[None,None],name = 'input')
target = tf.placeholder(tf.int32,[None,None],name='target')
learningRate = tf.placeholder(tf.float32);
keep_prob = tf.placeholder(tf.float32,name='keep_prob')
return Input, target, learningRate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
# tf.strided_slice drops the last token of every target sequence; a <GO> id is then prepended to each sequence
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm,output_keep_prob = keep_prob)
encoder_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
_, encoder_state = tf.nn.dynamic_rnn(encoder_cell,rnn_inputs,dtype=tf.float32)
return encoder_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
decode_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
outputs, final_state, final_context_state = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,decode_train,dec_embed_input,sequence_length,scope=decoding_scope)
outputs = tf.nn.dropout(outputs,keep_prob)
train_logics = output_fn(outputs)
return train_logics
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
decode_infer = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length,vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decode_infer, scope=decoding_scope)
# inference_logits = tf.nn.dropout(inference_logits,keep_prob)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers);
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
with tf.variable_scope("decoding") as decoding_scope:
train = decoding_layer_train(encoder_state,dec_cell,dec_embed_input,sequence_length,decoding_scope,output_fn,keep_prob)
with tf.variable_scope("decoding",reuse=True) as decoding_scope:
inference = decoding_layer_infer(encoder_state,dec_cell,dec_embeddings, target_vocab_to_int['<GO>'],target_vocab_to_int['<EOS>'] \
,sequence_length - 1,vocab_size,decoding_scope,output_fn,keep_prob)
# TODO: Implement Function
return train, inference
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
encoding_embed_input = tf.contrib.layers.embed_sequence(input_data,source_vocab_size,enc_embedding_size)
encoder_state = encoding_layer(encoding_embed_input,rnn_size,num_layers,keep_prob)
decoding_input = process_decoding_input(target_data,target_vocab_to_int,batch_size)
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, decoding_input)
train,inference = decoding_layer(dec_embed_input,dec_embeddings,encoder_state,target_vocab_size,\
sequence_length,rnn_size,num_layers,target_vocab_to_int,keep_prob)
# TODO: Implement Function
return train,inference
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 90
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 300
decoding_embedding_size = 300
# Learning Rate
learning_rate = 0.005
# Dropout Keep Probability
keep_probability = 0.6
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
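# A minimal, self-contained sketch (not part of the original project) showing how
# np.pad extends a shorter batch of sequences with zeros so that two batches of
# different lengths can be compared element-wise, which is what get_accuracy()
# above relies on.
import numpy as np
short = np.array([[1, 2, 3]])                          # batch of 1 sequence, length 3
padded = np.pad(short, [(0, 0), (0, 2)], 'constant')   # add 2 extra time steps filled with zeros
print(padded)                                          # [[1 2 3 0 0]]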
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
    sentence_lower = sentence.lower()
sentence_ids = [vocab_to_int.get(word,vocab_to_int['<UNK>']) for word in sentence_lower.split() ]
# TODO: Implement Function
return sentence_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
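# A quick illustrative check (the toy vocabulary below is invented for this sketch,
# not the project's real one): words missing from the vocabulary fall back to the
# '<UNK>' id via dict.get().
toy_vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'truck': 3}
print(sentence_to_seq('He saw a truck', toy_vocab))  # [1, 2, 0, 3] - 'a' maps to <UNK>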
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
13,323
|
<ASSISTANT_TASK:>
Python Code:
import ctcsound
c = ctcsound.Csound()
ret = c.compileCsd("test1.csd")
if ret == ctcsound.CSOUND_SUCCESS:
c.start()
c.perform()
c.reset()
# Defining our Csound ORC code within a multiline String
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
aout vco2 0.5, 440
outs aout, aout
endin
# Defining our Csound SCO code
sco = "i1 0 1"
#c = ctcsound.Csound()
c.setOption("-odac") # Using SetOption() to configure Csound
# Note: use only one commandline flag at a time
c.compileOrc(orc) # Compile the Csound Orchestra string
c.readScore(sco) # Compile the Csound SCO String
c.start() # When compiling from strings, this call is necessary before doing any performing
c.perform() # Run Csound to completion
c.reset()
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
aout vco2 0.5, 440
outs aout, aout
endin
# Our Score for our project
sco = "i1 0 1"
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
c.readScore(sco) # Read in Score from String
c.start() # When compiling from strings, this call is necessary before doing any performing
# The following is our main performance loop. We will perform one block of sound at a time
# and continue to do so while it returns 0, which signifies to keep processing. We will
# explore this loop technique in further examples.
while not c.performKsmps():
print('.', end='')
print()
c.reset()
# Our Orchestra for our project
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
aout vco2 0.5, 440
outs aout, aout
endin
# Our Score for our project
sco = "i1 0 1"
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
c.readScore(sco) # Read in Score from String
c.start() # When compiling from strings, this call is necessary before doing any performing
t = ctcsound.CsoundPerformanceThread(c.csound()) # Create a new CsoundPerformanceThread, passing in the Csound object
t.play() # starts the thread, which is now running separately from the main thread. This
# call is asynchronous and will immediately return back here to continue code
# execution.
t.join() # Join will wait for the other thread to complete. If we did not call join(),
# after t.play() returns we would immediate move to the next line, c.stop().
# That would stop Csound without really giving it time to run.
c.reset()
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
ipch = cps2pch(p5, 12)
kenv linsegr 0, .05, 1, .05, .7, .4, 0
aout vco2 p4 * kenv, ipch
aout moogladder aout, 2000, 0.25
outs aout, aout
endin
sco = "i1 0 1 0.5 8.00"
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
c.readScore(sco) # Read in Score from pre-written String
c.start() # When compiling from strings, this call is necessary before doing any performing
# The following is our main performance loop. We will perform one block of sound at a time
# and continue to do so while it returns 0, which signifies to keep processing.
while (c.performKsmps() == 0):
pass
c.reset()
sco2 = ""
for i in range(13):
sco2 += "i1 %g .25 0.5 8.%02g\n"%(i * .25,i)
print(sco2)
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
c.readScore(sco2) # Read in Score from pre-written String
c.start() # When compiling from strings, this call is necessary before doing any performing
# The following is our main performance loop. We will perform one block of sound at a time
# and continue to do so while it returns 0, which signifies to keep processing.
while (c.performKsmps() == 0):
pass
c.reset()
from random import randint
vals = [] #initialize a list to hold lists of values
for i in range(13): #populate that list
vals.append([1, i * .25, .25, 0.5, "8.%02g"%(randint(0,15))])
# convert list of lists into a list of strings
vals = ["i" + " ".join(map(str,a)) for a in vals]
# now convert that list of strings into a single string
sco3 = "\n".join(vals)
print('Here is the list of lists that was converted into a list of strings:')
print()
print(vals)
print()
print('Here is the list of score events that was generated into a single string:')
print()
print(sco3)
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
c.readScore(sco3) # Read in Score from pre-written String
c.start() # When compiling from strings, this call is necessary before doing any performing
# The following is our main performance loop. We will perform one block of sound at a time
# and continue to do so while it returns 0, which signifies to keep processing.
while (c.performKsmps() == 0):
pass
c.reset()
from random import randint
def midi2pch(num):
"Convert MIDI Note Numbers to Csound PCH format"
return "%d.%02g" % (3 + (num / 12), num % 12)
class Note(object):
def __init__(self, *args):
self.pfields = list(args)
def __str__(self):
retVal = "i"
for i in range(len(self.pfields)):
if(i == 4):
retVal += " " + midi2pch(self.pfields[i])
else:
retVal += " " + str(self.pfields[i])
return retVal
# Our Orchestra for our project
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
ipch = cps2pch(p5, 12)
kenv linsegr 0, .05, 1, .05, .7, .4, 0
aout vco2 p4 * kenv, ipch
aout moogladder aout, 2000, 0.25
outs aout, aout
endin
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
notes = [] #initialize a list to hold lists of values
for i in range(13): #populate that list
notes.append( Note(1, i * .25, .25, 0.5, randint(60,75)) )
# now convert list of Note objects to string
sco = ""
for n in notes:
sco += "%s\n"%n # this implicitly calls the __str__ method on the Note Class
# generate notes again transposed a Major 3rd up
for n in notes:
n.pfields[4] += 4
n.pfields[1] += .125
sco += "%s\n"%n
print('Here is the list of score events that was generated:')
print()
print(sco)
c.readScore(sco) # Read in Score generated from notes
c.start() # When compiling from strings, this call is necessary before doing any performing
# The following is our main performance loop. We will perform one block of sound at a time
# and continue to do so while it returns 0, which signifies to keep processing.
while (c.performKsmps() == 0):
pass
c.reset()
from random import randint, random
class RandomLine(object):
def __init__(self, base, range):
self.curVal = 0.0
self.reset()
self.base = base
self.range = range
def reset(self):
self.dur = randint(256,512)
self.end = random()
self.increment = (self.end - self.curVal) / self.dur
def getValue(self):
self.dur -= 1
if(self.dur < 0):
self.reset()
retVal = self.curVal
self.curVal += self.increment
return self.base + (self.range * retVal)
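# Illustrative only (not part of the Csound examples): RandomLine drifts from its
# current value toward a new random target over 256-512 calls, so successive
# getValue() calls form a smooth, slowly changing control signal. The first call
# returns the base value and later calls creep away from it in tiny increments.
demo_line = RandomLine(400, 80)
print([demo_line.getValue() for _ in range(5)])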
# Our Orchestra for our project
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
kamp chnget "amp"
kfreq chnget "freq"
printk 0.5, kamp
printk 0.5, kfreq
aout vco2 kamp, kfreq
aout moogladder aout, 2000, 0.25
outs aout, aout
endin
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.setOption("-m7") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
sco = "i1 0 60\n"
c.readScore(sco) # Read in Score generated from notes
c.start() # When compiling from strings, this call is necessary before doing any performing
# The following is our main performance loop. We will perform one block of sound at a time
# and continue to do so while it returns 0, which signifies to keep processing.
amp = RandomLine(.6, .2) # create RandomLine for use with Amplitude
freq = RandomLine(400, 80) # create RandomLine for use with Frequency
c.setControlChannel("amp", amp.getValue()) # Initialize channel value before running Csound
c.setControlChannel("freq", freq.getValue()) # Initialize channel value before running Csound
print('Initial amp value is: ' + str(amp.getValue()))
print('Initial freq value is: ' + str(freq.getValue()))
while (c.performKsmps() == 0):
c.setControlChannel("amp", amp.getValue()) # update channel value
c.setControlChannel("freq", freq.getValue()) # update channel value
c.reset()
from random import randint, random
class RandomLine(object):
def __init__(self, base, range):
self.curVal = 0.0
self.reset()
self.base = base
self.range = range
def reset(self):
self.dur = randint(256,512)
self.end = random()
self.slope = (self.end - self.curVal) / self.dur
def getValue(self):
self.dur -= 1
if(self.dur < 0):
self.reset()
retVal = self.curVal
self.curVal += self.slope
return self.base + (self.range * retVal)
# Our Orchestra for our project
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
kamp chnget "amp"
kfreq chnget "freq"
printk 0.5, kamp
printk 0.5, kfreq
aout vco2 kamp, kfreq
aout moogladder aout, 2000, 0.25
outs aout, aout
endin
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.setOption("-m7") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
sco = "i1 0 60\n"
c.readScore(sco) # Read in Score generated from notes
c.start() # When compiling from strings, this call is necessary before doing any performing
# The following calls return a tuple. The first value of the tuple is a numpy array
# encapsulating the Channel Pointer retrieved from Csound and the second
# value is an error message, if an error happened (here it is discarded with _).
ampChannel, _ = c.channelPtr("amp",
ctcsound.CSOUND_CONTROL_CHANNEL | ctcsound.CSOUND_INPUT_CHANNEL)
freqChannel, _ = c.channelPtr("freq",
ctcsound.CSOUND_CONTROL_CHANNEL | ctcsound.CSOUND_INPUT_CHANNEL)
amp = RandomLine(.4, .2)
freq = RandomLine(400, 80)
ampChannel[0] = amp.getValue() # note we are now setting values in the ndarrays
freqChannel[0] = freq.getValue()
print('Initial amp value is: ' + str(amp.getValue()))
print('Initial freq value is: ' + str(freq.getValue()))
while (c.performKsmps() == 0):
ampChannel[0] = amp.getValue()
freqChannel[0] = freq.getValue()
c.reset()
from random import randint, random
class RandomLine(object):
def __init__(self, base, range):
self.curVal = 0.0
self.reset()
self.base = base
self.range = range
def reset(self):
self.dur = randint(256,512)
self.end = random()
self.slope = (self.end - self.curVal) / self.dur
def getValue(self):
self.dur -= 1
if(self.dur < 0):
self.reset()
retVal = self.curVal
self.curVal += self.slope
return self.base + (self.range * retVal)
# The following call return a tuple. The first value of the tuple is a numpy array
# encapsulating the Channel Pointer retrieved from Csound and the second
# value is an error message, if an error happened (here it is discarded with _).
def createChannel(channelName):
chn, _ = c.channelPtr(channelName,
ctcsound.CSOUND_CONTROL_CHANNEL | ctcsound.CSOUND_INPUT_CHANNEL)
return chn
###############################
# Our Orchestra for our project
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
kamp chnget "amp"
kfreq chnget "freq"
printk 0.5, kamp
printk 0.5, kfreq
aout vco2 kamp, kfreq
aout moogladder aout, 2000, 0.25
outs aout, aout
endin
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.setOption("-m7") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
sco = "i1 0 60\n"
c.readScore(sco) # Read in Score generated from notes
c.start() # When compiling from strings, this call is necessary before doing any performing
ampChannel = createChannel("amp") # uses utility method to create a channel and get numpy array to write to
freqChannel = createChannel("freq")
amp = RandomLine(.4, .2)
freq = RandomLine(400, 80)
ampChannel[0] = amp.getValue()
freqChannel[0] = freq.getValue()
while (c.performKsmps() == 0):
ampChannel[0] = amp.getValue()
freqChannel[0] = freq.getValue()
c.reset()
from random import randint, random
class RandomLine(object):
def __init__(self, base, range):
self.curVal = 0.0
self.reset()
self.base = base
self.range = range
def reset(self):
self.dur = randint(256,512)
self.end = random()
self.slope = (self.end - self.curVal) / self.dur
def getValue(self):
self.dur -= 1
if(self.dur < 0):
self.reset()
retVal = self.curVal
self.curVal += self.slope
return self.base + (self.range * retVal)
def createChannel(channelName):
chn, _ = c.channelPtr(channelName,
ctcsound.CSOUND_CONTROL_CHANNEL | ctcsound.CSOUND_INPUT_CHANNEL)
return chn
class ChannelUpdater(object):
def __init__(self, channelName, updater):
self.updater = updater
self.channel = createChannel(channelName)
def update(self):
self.channel[0] = self.updater.getValue()
###############################
# Our Orchestra for our project
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
kamp chnget "amp"
kfreq chnget "freq"
kres chnget "resonance"
printk 0.5, kamp
printk 0.5, kfreq
printk 0.5, kres
aout vco2 kamp, kfreq
aout moogladder aout, 2000, kres
outs aout, aout
endin
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.setOption("-m7") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
sco = "i1 0 60\n"
c.readScore(sco) # Read in Score generated from notes
c.start() # When compiling from strings, this call is necessary before doing any performing
# Create a set of ChannelUpdaters
channels = [ChannelUpdater("amp", RandomLine(.4, .2)),
ChannelUpdater("freq", RandomLine(400, 80)),
ChannelUpdater("resonance", RandomLine(0.4, .3))]
# Initialize all Channel Values
for chn in channels:
chn.update()
while (c.performKsmps() == 0):
for chn in channels: # update all channel values
chn.update()
c.reset()
from tkinter import *
from random import randint, random
###############################
# Our Orchestra for our project
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
kenv linsegr 0, .05, 1, .05, .9, .8, 0
aout vco2 p4 * kenv, p5
aout moogladder aout, 2000, p6
outs aout, aout
endin
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.setOption("-m7") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
c.start() # When compiling from strings, this call is necessary before doing any performing
perfThread = ctcsound.CsoundPerformanceThread(c.csound())
perfThread.play()
class Application(Frame):
def __init__(self,master=None):
master.title("Csound API GUI Example")
self.items = []
self.notes = []
Frame.__init__(self,master)
self.pack()
self.createUI()
self.master.protocol("WM_DELETE_WINDOW", self.quit)
def createUI(self):
self.size = 600
self.canvas = Canvas(self,height=self.size,width=self.size,bg="darkgray")
self.canvas.pack()
# create button and setup the playNote() callback
self.button = Button(self.canvas, text='Play Note', command=self.playNote)
self.button.pack()
def playNote(self):
perfThread.inputMessage("i1 0 2 .5 400 .25")
def quit(self):
self.master.destroy()
perfThread.stop()
perfThread.join()
app = Application(Tk())
app.mainloop()
c.reset()
from tkinter import *
from random import randint, random
###############################
# Our Orchestra for our project
orc =
sr=44100
ksmps=32
nchnls=2
0dbfs=1
gkpch chnexport "freq", 1
instr 1
kpch port gkpch, 0.01, i(gkpch)
printk .5, gkpch
kenv linsegr 0, .05, 1, .05, .9, .8, 0
aout vco2 p4 * kenv, kpch
aout moogladder aout, 2000, .25
outs aout, aout
endin
#c = ctcsound.Csound() # create an instance of Csound
c.setOption("-odac") # Set option for Csound
c.setOption("-m7") # Set option for Csound
c.compileOrc(orc) # Compile Orchestra from String
c.start() # When compiling from strings, this call is necessary before doing any performing
perfThread = ctcsound.CsoundPerformanceThread(c.csound())
perfThread.play()
def createChannel(channelName):
chn, _ = c.channelPtr(channelName,
ctcsound.CSOUND_CONTROL_CHANNEL | ctcsound.CSOUND_INPUT_CHANNEL)
return chn
class SliderWrapper(object):
def __init__(self, csound, channelName, slider):
self.slider = slider
self.channel = createChannel(channelName)
def update(self):
self.channel[0] = self.slider.get()
class Application(Frame):
def __init__(self,master=None):
master.title("Csound API GUI Example")
self.items = []
self.notes = []
Frame.__init__(self,master)
self.pack()
self.createUI()
self.master.protocol("WM_DELETE_WINDOW", self.quit)
def createUI(self):
self.size = 600
self.canvas = Canvas(self,height=self.size,width=self.size)
self.canvas.pack()
self.button = Button(self.canvas, text='Play Note', command=self.playNote)
self.button.pack()
self.freqSlider = Scale(self.canvas,from_=80.0, to=600.0,command=self.setFreq,label="Freq")
self.freqSlider.pack()
self.freqUpdater = SliderWrapper(c, "freq", self.freqSlider)
def playNote(self):
perfThread.inputMessage("i1 0 2 .3")
def setFreq(self, val):
print(val)
self.freqUpdater.update()
def quit(self):
self.master.destroy()
perfThread.stop()
perfThread.join()
app = Application(Tk())
app.mainloop()
c.stop()
del c
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 1 - Simple Compilation with Csound
Step3: Example 2 - Compilation with Csound without CSD
Step5: Example 3 - Using Our Own Performance Loop
Step7: Example 4 - Using Csound's Performance Thread
Step9: Example 5 - Generating Score
Step10: Static Score
Step11: Generating a Score String with a Loop
Step12: Generating a Score Using an Intermediate Data Structure
Step14: Example 6 - Further Example of Generating Score
Step16: Example 7 - Communicating Continuous Values with Csound's Channel System
Step18: Example 8 - More Efficient Channel Communications
Step20: Example 9 - More Flexible and Efficient Channel Communications
Step22: Example 10 - Even More Flexible and Efficient Channel Communications
Step24: Example 11 - Graphical User Interfaces
Step26: Example 12 - Graphical User Interfaces
|
13,324
|
<ASSISTANT_TASK:>
Python Code:
import openpnm as op
pn = op.network.Cubic([4, 4,])
geo = op.geometry.SpheresAndCylinders(network=pn, pores=pn.Ps, throats=pn.Ts)
air = op.phases.Air(network=pn)
phys = op.physics.Basic(network=pn, phase=air, geometry=geo)
alg = op.algorithms.ReactiveTransport(network=pn, phase=air)
print(alg.sets)
alg.sets.prefix = 'rxn'
print(alg.sets)
s = {"prefix": "rxn"}
alg = op.algorithms.ReactiveTransport(network=pn, phase=air, settings=s)
print(alg.sets)
class MySettings:
prefix = 'rxn'
# alg = op.algorithms.ReactiveTransport(network=pn, phase=air, settings=MySettings())
# print(alg.sets)
from traits.api import TraitError
try:
alg.sets.phase = 1
except TraitError as e:
print(e)
from openpnm.utils import SettingsData, SettingsAttr
from traits.api import Int, Str, Float, List, Set
class CustomSettings(SettingsData):
a = Int()
b = Float(4.4)
c = Set()
d = List(Str)
s = CustomSettings()
print(s)
s.a = 2
s.b = 5.5
print(s)
try:
s.a = 1.1
except TraitError as e:
print(e)
s.d.append('item')
try:
s.d.append(100)
except TraitError as e:
print(e)
try:
s.d = 5
except TraitError as e:
print(e)
class MySettings(SettingsData):
a = Int(1)
b = 2
mysets = MySettings()
print(mysets)
class MySettings:
a = 1
b = 2
mysets = MySettings()
print(mysets.a, mysets.b)
S = SettingsAttr(s)
print(S)
dir(S)
s_new = {'a': 5, 'e': 6}
S._update(s_new)
print(S)
try:
S.e = 5.5
except TraitError as e:
print(e)
S.d is S._settings.d
S.f = 'a string'
try:
S.f = 1.0
except TraitError as e:
print(e)
print(S)
class DocumentedSettingsData(SettingsData):
r
A class that holds the following settings.
Parameters
----------
name : str
The name of the object
id_num : int
The id number of the object
name = Str('foo')
id_num = Int(0)
d = DocumentedSettingsData()
print(d.__doc__)
D = SettingsAttr(d)
print(D.__doc__)
import docrep
docstr = docrep.DocstringProcessor()
# This decorator tells docrep to fetch the docstring from this class and make it available elsewhere:
@docstr.get_sections(base='DocumentSettingsData', sections=['Parameters'])
class DocumentedSettingsData(SettingsData):
r
A class that holds the following settings.
Parameters
----------
name : str
The name of the object
id_num : int
The id number of the object
name = Str('foo')
id_num = Int(0)
# This tells docrep to parse this docstring and insert text at the %
@docstr.dedent
class ChildSettingsData(DocumentedSettingsData):
r
    A subclass of DocumentedSettingsData that holds some additional settings
Parameters
----------
%(DocumentSettingsData.parameters)s
max_iter : int
The maximum number of iterations to do
max_iter = Int(10)
E = ChildSettingsData()
print(E.__doc__)
E.visible_traits()
S = SettingsAttr(E)
print(S.__doc__)
isinstance(alg.sets, SettingsAttr)
class SpecificSettings(SettingsData):
a = Int(4)
class SomeAlg:
def __init__(self, settings={}, **kwargs):
self.settings = SettingsAttr(SpecificSettings())
self.settings._update(settings)
alg = SomeAlg()
print(alg.settings)
s = {'name': 'bob', 'a': 3}
alg2 = SomeAlg(settings=s)
print(alg2.settings)
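# A small illustrative sketch (the class and attribute names here are invented for
# this example, not OpenPNM's): the same traits machinery shown above can back any
# settings container, and assigning the wrong type raises a TraitError just as it
# did for 'prefix'.
class DemoSettings(SettingsData):
    tolerance = Float(1e-6)

demo = DemoSettings()
demo.tolerance = 1e-4        # fine, still a float
try:
    demo.tolerance = 'tight' # wrong type
except TraitError as e:
    print(e)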
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Normal Usage
Step2: We can see that many default settings are already present by printing the settings attribute
Step3: We can override these settings manually
Step4: We could also have updated these settings when creating the algorithm object by passing in a set of arguments. This can be in the form of a dictionary
Step5: Or in a 'dataclass' style, which is how things are done behind the scenes in OpenPNM, as described in a later section
Step6: One new feature on OpenPNM V3 is that the datatype of some settings is enforced. For instance the 'prefix' setting must be a str, otherwise an error is raised
Step7: OpenPNM uses the traits package to control this behavior, which will be explained in more detail in the next section.
Step8: Now we can print s to inspect the settings. We'll see some default values for things that were not initialized like a, while b is the specified value.
Step9: The traits package enforces the datatype of each of these attributes
Step10: Let's look at the attribute protection in action again
Step11: The traits package also enforces the type of values we can put into the list stored in d
Step12: The first one works because we specified a list of strings, while the second fails because it is attempting to write an integer.
Step13: Gotcha With the HasTraits Class
Step14: However, if you create a custom class from a basic python object it will work
Step15: The SettingsAttr Class
Step16: Importantly, only the user-created attributes show up, which can be tested using the dir() command
Step17: SettingsAttr has as few additional features. You can add a new batch of settings after instantiation as follows
Step18: We can see the updated value of a, as well as the newly added e. Because e contained an integer (6), the datatype of e will be forced to remain an integer
Step19: Note that the _update method begins with an underscore. This prevents it from appearing in the autocomplete menu to ensure it stays clean.
Step20: Another aspect to keep in mind is that the _settings attribute is a HasTraits object. This means that all values added to the settings must have an enforced datatype. This is done on the fly, based on the type of value received. For instance, once you set an attribute to a string, its type is fixed
Step22: Adding Documentation to a SettingsData and SettingsAttr Class
Step23: Note that this docstring was written when we defined the DocumentedSettingsData subclass and is attached to it, but we'll be interacting with the SettingsAttr class. When a SettingsAttr is created it adopts the docstring of the received settings object. This can be either a proper SettingsData/HasTraits class or a basic dataclass-style object. The docstring can only be set on initialization though, so any new attributes that are created by adding values to the object (i.e. D.zz_top = 'awesome') will not be documented.
Step26: This machinery was designed with the idea of inheriting docstrings using the docrep package. The following illustrates not only how the SettingsData class can be subclassed to add new settings (e.g. from GenericTransport to ReactiveTransport), but also how to use the highly under-rated docrep package to inherit the docstrings as well
Step27: And we can also see that max_iter was added to the values of name and id_num on the parent class
Step28: Again, as mentioned above, this inherited docstring is adopted by the SettingsAttr
Step29: Attaching to an OpenPNM Object
Step30: OpenPNM declares SettingsData classes with each file where class is defined, then this is attached upon initialization. This is illustrated below
Step31: Or with some additional user-defined settings and overrides
|
13,325
|
<ASSISTANT_TASK:>
Python Code:
import requests
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import (
adjusted_rand_score,
adjusted_mutual_info_score,
homogeneity_score,
completeness_score,
v_measure_score
)
csv_content = requests.get("http://archive.ics.uci.edu/ml/machine-learning-databases/00292/Wholesale%20customers%20data.csv").text
csv_list = csv_content.strip().split("\n")
row_matrix = [line.strip().split(",") for line in csv_list]
row_name = row_matrix[0]
dataset = pd.DataFrame(row_matrix[1:],columns=row_name,dtype="int64")
dataset[:10]
dataset.describe()
for i in row_name[2:]:
dataset[i] = np.log(dataset[i]+0.0000001)
dataset[:10]
train_set,validation_set = train_test_split(dataset)
agc = AgglomerativeClustering(n_clusters=3)
agc.fit(train_set[row_name[2:]])
adjusted_rand_score(train_set["Channel"],agc.labels_)
adjusted_mutual_info_score(train_set["Channel"],agc.labels_)
homogeneity_score(train_set["Channel"],agc.labels_)
completeness_score(train_set["Channel"],agc.labels_)
v_measure_score(train_set["Channel"],agc.labels_)
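# Illustrative aside (toy labels, not the wholesale-customers data): the adjusted
# Rand index only compares groupings, so permuting the cluster ids leaves the
# score unchanged. That is why it is safe to compare agc.labels_ with the
# "Channel" column even though the numeric ids differ.
true_toy = [0, 0, 1, 1, 2, 2]
pred_toy = [2, 2, 0, 0, 1, 1]  # same partition, different label names
print(adjusted_rand_score(true_toy, pred_toy))  # 1.0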
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data acquisition
Step2: Data preprocessing
Step3: Splitting the dataset
Step4: Training the model
Step5: Model evaluation
|
13,326
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
func = np.poly1d(np.array([1, 2, 3, 4]).astype(float))
func2 = func.deriv(m=2)
x = np.linspace(-10, 10, 30)
y = func(x)
y2 = func2(x)
plt.plot(x, y)
plt.plot(x, y2, 'r>')
plt.xlabel('x')
plt.ylabel('y(x)')
plt.show()
func1 = func.deriv(m=1)
y1 = func1(x)
plt.plot(x, y, 'ro', x, y1, 'g--')
plt.show()
plt.subplot(311)
plt.plot(x, y, 'r-')
plt.title("Polynomial")
plt.subplot(312)
plt.plot(x, y1, 'b^')
plt.title("First Derivative")
plt.subplot(313)
plt.plot(x, y2, 'go')
plt.title("Second Derivative")
plt.show()
# Use today's date minus one year as the start date
from matplotlib.dates import DateFormatter
from matplotlib.dates import DayLocator
from matplotlib.dates import MonthLocator
from matplotlib.finance import quotes_historical_yahoo_ochl
from matplotlib.finance import candlestick_ochl
import sys
from datetime import date
today = date.today()
start = (today.year - 1, today.month, today.day)
alldays = DayLocator()
months = MonthLocator()
month_formatter = DateFormatter("%b %Y")
# Download the stock price data from Yahoo Finance
symbol = 'BIDU' # Baidu's ticker symbol
quotes = quotes_historical_yahoo_ochl(symbol, start, today)
# Create a figure object, the top-level container for plotting components
fig = plt.figure()
# Add a subplot
ax = fig.add_subplot(111)
# Set the major locator on the x axis to the month locator, which handles the coarse ticks
ax.xaxis.set_major_locator(months)
# Set the minor locator on the x axis to the day locator, which handles the fine ticks
ax.xaxis.set_minor_locator(alldays)
# Set the major formatter on the x axis to the month formatter, which labels the coarse ticks
ax.xaxis.set_major_formatter(month_formatter)
# Use the candlestick function from the matplotlib.finance package to draw a candlestick chart
candlestick_ochl(ax, quotes)
# Format the labels on the x axis as dates
fig.autofmt_xdate()
plt.title('Baidu, Inc. (BIDU)')
plt.show()
# Use one year of Baidu stock price data to draw a distribution histogram
# Convert the quotes from a Python list to a NumPy array and extract the closing prices
quotes = np.array(quotes)
cp = quotes.T[4]
# Choose a reasonable number of bins and plot the distribution histogram
plt.hist(cp, np.sqrt(len(cp)))
plt.show()
# Extract the dates and trading volumes from the downloaded Baidu historical data
dates = quotes.T[0]
volume = quotes.T[5]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.semilogy(dates, volume)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_minor_locator(alldays)
ax.xaxis.set_major_formatter(month_formatter)
fig.autofmt_xdate()
plt.show()
# Compute the stock returns and the change in trading volume
ret = np.diff(cp) / cp[:-1]
volchange = np.diff(volume) / volume[:-1]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(ret, volchange, c=ret*100, s=volchange*100, alpha=0.5)
#c is color,s is size
#alpha is between 0 (transparent) and 1 (opaque)
ax.set_title('Closing price and volume returns')
ax.grid(True)
plt.show()
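# A tiny worked example (made-up prices, not the downloaded quotes) of the return
# calculation used above: np.diff gives day-over-day differences, and dividing by
# the previous day's price turns them into simple returns.
demo_cp = np.array([10.0, 11.0, 9.9])
print(np.diff(demo_cp) / demo_cp[:-1])  # [ 0.1 -0.1]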
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(dates, cp)
# Shade the area below the closing price, filling with different colors depending on whether the price is above or below the mean closing price
# fill_between(x, y1, y2=0, where=None, **kwargs)
# filling the regions between y1 and y2 where where==True
plt.fill_between(dates, cp.min(), cp, where=cp>cp.mean(), facecolor="green", alpha=0.4)
plt.fill_between(dates, cp.min(), cp, where=cp<cp.mean(), facecolor="red", alpha=0.6)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_minor_locator(alldays)
ax.xaxis.set_major_formatter(month_formatter)
plt.grid(True)
fig.autofmt_xdate()
plt.show()
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, 2, 3])
ax.plot([3, 2, 2])
ax.plot([2, 2, 1])
ax.legend(('line1','line2','line3'),loc='best')
fig = plt.figure()
ax = fig.add_subplot(111)
line, = ax.plot(dates, cp)
plt.fill_between(dates, cp.min(), cp, where=cp>cp.mean(), facecolor="green", alpha=0.4)
plt.fill_between(dates, cp.min(), cp, where=cp<cp.mean(), facecolor="red", alpha=0.6)
line.set_label("Closing Price")
leg = ax.legend(loc='best', fancybox=True)
leg.get_frame().set_alpha(0.5)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_minor_locator(alldays)
ax.xaxis.set_major_formatter(month_formatter)
plt.grid(True)
fig.autofmt_xdate()
plt.show()
fig = plt.figure(1,figsize=(8,5))
ax = fig.add_subplot(111)
t = np.arange(0.0, 3.0, 0.01)
s = np.cos(2*np.pi*t)
line, = ax.plot(t, s, lw=3, color='purple')
ax.annotate(s='arrowstyle', xy=(0, 1), xycoords='data',
xytext=(0, 30), textcoords='offset points',
arrowprops=dict(arrowstyle="->")
)
ax.annotate('arc3', xy=(0.5, -1), xycoords='data',
xytext=(-30, -30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",connectionstyle="arc3,rad=.2")
)
ax.annotate('arc', xy=(1., 1), xycoords='data',
xytext=(-40, 30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc,angleA=0,armA=30,rad=10"),
)
ax.annotate('arc', xy=(1.5, -1), xycoords='data',
xytext=(-40, -30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc,angleA=0,armA=20,angleB=-90,armB=15,rad=7"),
)
ax.annotate('angle', xy=(2., 1), xycoords='data',
xytext=(-50, 30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",
connectionstyle="angle,angleA=0,angleB=90,rad=10"),
)
ax.annotate('angle3', xy=(2.5, -1), xycoords='data',
xytext=(-50, -30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",
connectionstyle="angle3,angleA=0,angleB=-90"),
)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. 简单绘图
Step2: 其中,linspace函数常见x轴的数值,在-10和10之间产生30个均匀分布的值。
Step3: plot函数可以接受任意个数的参数,我们可以使用可选的格式字符串参数指定线条的颜色和风格,默认为'b-'即蓝色视线。你可以指定其他风格。
Step4: 1.3 分组绘制多幅图
Step5: 2. 绘制百度的全年股票价格
Step6: 创建定位器(locator),使用来自matplotlib.dates的对象在x轴定位月份和日期
Step7: 创建日期格式化器(date formatter),以格式化x轴的日期。该格式化器创建一个字符串,包含简写的月份和年份。
Step8: 3. 绘制直方图
Step9: 4. 对数坐标图
Step10: 5. 散点图
Step11: 6. 着色
Step12: 7. 图例和注释
Step13: 我们就上面的收盘价例子添加图例
Step14: 注释示例
|
13,327
|
<ASSISTANT_TASK:>
Python Code:
# Perform standard imports:
import spacy
nlp = spacy.load('en_core_web_sm')
doc1 = nlp(u"I am a runner running in a race because I love to run since I ran today")
for token in doc1:
print(token.text, '\t', token.pos_, '\t', token.lemma, '\t', token.lemma_)
def show_lemmas(text):
for token in text:
print(f'{token.text:{12}} {token.pos_:{6}} {token.lemma:<{22}} {token.lemma_}')
doc2 = nlp(u"I saw eighteen mice today!")
show_lemmas(doc2)
doc3 = nlp(u"I am meeting him tomorrow at the meeting.")
show_lemmas(doc3)
doc4 = nlp(u"That's an enormous automobile")
show_lemmas(doc4)
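# A minimal sketch (the sentence is chosen arbitrarily) of collecting the string
# lemmas into a plain Python list, which is often all you need downstream.
doc5 = nlp(u"The striped bats were hanging upside down")
print([token.lemma_ for token in doc5])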
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font color=green>In the above sentence, running, run and ran all point to the same lemma run (...11841) to avoid duplication.</font>
Step2: Here we're using an f-string to format the printed text by setting minimum field widths and adding a left-align to the lemma hash value.
Step3: <font color=green>Notice that the lemma of saw is see, mice is the plural form of mouse, and yet eighteen is its own number, not an expanded form of eight.</font>
Step4: <font color=green>Here the lemma of meeting is determined by its Part of Speech tag.</font>
|
13,328
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
floodingReports = pd.Series([5, 6, 2, 9, 12])
floodingReports
floodingReports = pd.Series([5, 6, 2, 9, 12], index=['Cochise County', 'Pima County', 'Santa Cruz County', 'Maricopa County', 'Yuma County'])
floodingReports
floodingReports['Cochise County']
floodingReports[floodingReports > 6]
# Create a dictionary
fireReports_dict = {'Cochise County': 12, 'Pima County': 342, 'Santa Cruz County': 13, 'Maricopa County': 42, 'Yuma County' : 52}
# Convert the dictionary into a pd.Series, and view it
fireReports = pd.Series(fireReports_dict); fireReports
fireReports.index = ["Cochice", "Pima", "Santa Cruz", "Maricopa", "Yuma"]
fireReports
data = {'county': ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data)
df
dfColumnOrdered = pd.DataFrame(data, columns=['county', 'year', 'reports'])
dfColumnOrdered
dfColumnOrdered['newsCoverage'] = pd.Series([42.3, 92.1, 12.2, 39.3, 30.2])
dfColumnOrdered
del dfColumnOrdered['newsCoverage']
dfColumnOrdered
dfColumnOrdered.T
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Series 101
Step2: Note that the first column of numbers (0 to 4) is the index.
Step3: View the number of floodingReports in Cochise County
Step4: View the counties with more than 6 flooding reports
Step5: Create a pandas series from a dictionary
Step6: Change the index of a series to shorter names
Step7: DataFrame 101
Step8: Set the order of the columns using the columns attribute
Step9: Add a column
Step10: Delete a column
Step11: Transpose the dataframe
|
13,329
|
<ASSISTANT_TASK:>
Python Code:
from dolfin import *
from rbnics import *
@DEIM()
class Gaussian(EllipticCoerciveProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
EllipticCoerciveProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.u = TrialFunction(V)
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=subdomains)
self.f = ParametrizedExpression(
self, "exp(- 2 * pow(x[0] - mu[0], 2) - 2 * pow(x[1] - mu[1], 2))", mu=(0., 0.),
element=V.ufl_element())
# note that we cannot use self.mu in the initialization of self.f, because self.mu has not been initialized yet
# Return custom problem name
def name(self):
return "GaussianDEIM"
    # Return a lower bound for the coercivity constant alpha.
def get_stability_factor_lower_bound(self):
return 1.
# Return theta multiplicative terms of the affine expansion of the problem.
def compute_theta(self, term):
if term == "a":
return (1.,)
elif term == "f":
return (1.,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
u = self.u
a0 = inner(grad(u), grad(v)) * dx
return (a0,)
elif term == "f":
f = self.f
f0 = f * v * dx
return (f0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),
DirichletBC(self.V, Constant(0.0), self.boundaries, 2),
DirichletBC(self.V, Constant(0.0), self.boundaries, 3)]
return (bc0,)
elif term == "inner_product":
u = self.u
x0 = inner(grad(u), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
mesh = Mesh("data/gaussian.xml")
subdomains = MeshFunction("size_t", mesh, "data/gaussian_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/gaussian_facet_region.xml")
V = FunctionSpace(mesh, "Lagrange", 1)
problem = Gaussian(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(-1.0, 1.0), (-1.0, 1.0)]
problem.set_mu_range(mu_range)
reduction_method = ReducedBasis(problem)
reduction_method.set_Nmax(20, DEIM=21)
reduction_method.set_tolerance(1e-4, DEIM=1e-8)
reduction_method.initialize_training_set(50, DEIM=60)
reduced_problem = reduction_method.offline()
online_mu = (0.3, -1.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
reduced_solution_11 = reduced_problem.solve(DEIM=11)
plot(reduced_solution_11, reduced_problem=reduced_problem)
reduced_solution_1 = reduced_problem.solve(DEIM=1)
plot(reduced_solution_1, reduced_problem=reduced_problem)
reduction_method.initialize_testing_set(50, DEIM=60)
reduction_method.error_analysis(filename="error_analysis")
reduction_method.error_analysis(
with_respect_to=exact_problem, filename="error_analysis__with_respect_to_exact")
reduction_method.error_analysis(
with_respect_to=exact_problem, DEIM=11, filename="error_analysis__with_respect_to_exact__DEIM_11")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3. Affine decomposition
Step2: 4. Main program
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the Gaussian class
Step5: 4.4. Prepare reduction with a reduced basis method
Step6: 4.5. Perform the offline phase
Step7: 4.6.1. Perform an online solve
Step8: 4.6.2. Perform an online solve with a lower number of DEIM terms
Step9: 4.6.3. Perform an online solve with an even lower number of DEIM terms
Step10: 4.7.1. Perform an error analysis
Step11: 4.7.2. Perform an error analysis with respect to the exact problem
Step12: 4.7.3. Perform an error analysis with respect to the exact problem, but employing a smaller number of DEIM terms
|
13,330
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
# Make the features (X) and output (y) with 200 samples,
X, y = make_blobs(n_samples = 200,
# two feature variables,
n_features = 2,
# three clusters,
centers = 3,
# with .5 cluster standard deviation,
cluster_std = 0.5,
# shuffled,
shuffle = True)
# Create a scatterplot of the first and second features
plt.scatter(X[:,0],
X[:,1])
# Show the scatterplot
plt.show()
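# An optional variation (same data as above): coloring each point by its generated
# cluster label makes the three blobs easy to tell apart.
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()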
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make Data
Step2: View Data
|
13,331
|
<ASSISTANT_TASK:>
Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "tensorflow"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
import tensorflow as tf
reset_graph()
x = tf.Variable(3, name="x")
y = tf.Variable(4, name="y")
f = x*x*y + y + 2
f
sess = tf.Session()
sess.run(x.initializer)
sess.run(y.initializer)
result = sess.run(f)
print(result)
sess.close()
with tf.Session() as sess:
x.initializer.run()
y.initializer.run()
result = f.eval()
result
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
result = f.eval()
result
init = tf.global_variables_initializer()
sess = tf.InteractiveSession()
init.run()
result = f.eval()
print(result)
sess.close()
result
reset_graph()
x1 = tf.Variable(1)
x1.graph is tf.get_default_graph()
graph = tf.Graph()
with graph.as_default():
x2 = tf.Variable(2)
x2.graph is graph
x2.graph is tf.get_default_graph()
w = tf.constant(3)
x = w + 2
y = x + 5
z = x * 3
with tf.Session() as sess:
print(y.eval()) # 10
print(z.eval()) # 15
with tf.Session() as sess:
y_val, z_val = sess.run([y, z])
print(y_val) # 10
print(z_val) # 15
import numpy as np
from sklearn.datasets import fetch_california_housing
reset_graph()
housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]
X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
XT = tf.transpose(X)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)
with tf.Session() as sess:
theta_value = theta.eval()
theta_value
X = housing_data_plus_bias
y = housing.target.reshape(-1, 1)
theta_numpy = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
print(theta_numpy)
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing.data, housing.target.reshape(-1, 1))
print(np.r_[lin_reg.intercept_.reshape(-1, 1), lin_reg.coef_.T])
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]
print(scaled_housing_data_plus_bias.mean(axis=0))
print(scaled_housing_data_plus_bias.mean(axis=1))
print(scaled_housing_data_plus_bias.mean())
print(scaled_housing_data_plus_bias.shape)
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = 2/m * tf.matmul(tf.transpose(X), error)
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
best_theta
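# A small sanity check in plain NumPy (not from the book): the closed-form gradient
# used above, 2/m * X^T (X theta - y), matches a finite-difference estimate of
# d(MSE)/d(theta) on a tiny random problem.
rnd = np.random.RandomState(42)
X_demo = rnd.rand(5, 2)
y_demo = rnd.rand(5, 1)
theta_demo = rnd.rand(2, 1)
grad_formula = 2 / 5 * X_demo.T.dot(X_demo.dot(theta_demo) - y_demo)
eps = 1e-6
grad_numeric = np.zeros_like(theta_demo)
for i in range(2):
    t_plus = theta_demo.copy(); t_plus[i, 0] += eps
    t_minus = theta_demo.copy(); t_minus[i, 0] -= eps
    mse_plus = np.mean((X_demo.dot(t_plus) - y_demo) ** 2)
    mse_minus = np.mean((X_demo.dot(t_minus) - y_demo) ** 2)
    grad_numeric[i, 0] = (mse_plus - mse_minus) / (2 * eps)
print(np.allclose(grad_formula, grad_numeric))  # True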
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = tf.gradients(mse, [theta])[0]
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
def my_func(a, b):
z = 0
for i in range(100):
z = a * np.cos(z + i) + z * np.sin(b - i)
return z
my_func(0.2, 0.3)
reset_graph()
a = tf.Variable(0.2, name="a")
b = tf.Variable(0.3, name="b")
z = tf.constant(0.0, name="z0")
for i in range(100):
z = a * tf.cos(z + i) + z * tf.sin(b - i)
grads = tf.gradients(z, [a, b])
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
print(z.eval())
print(sess.run(grads))
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,
momentum=0.9)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
reset_graph()
A = tf.placeholder(tf.float32, shape=(None, 3))
B = A + 5
with tf.Session() as sess:
B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]})
print(B_val_1)
print(B_val_2)
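# A tiny extra illustration (values chosen arbitrarily): the same placeholder can
# be fed any number of rows because its first dimension was declared as None.
with tf.Session() as sess:
    print(B.eval(feed_dict={A: np.zeros((4, 3))}))  # a (4, 3) array of fives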
n_epochs = 1000
learning_rate = 0.01
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
def fetch_batch(epoch, batch_index, batch_size):
np.random.seed(epoch * n_batches + batch_index) # not shown in the book
indices = np.random.randint(m, size=batch_size) # not shown
X_batch = scaled_housing_data_plus_bias[indices] # not shown
y_batch = housing.target.reshape(-1, 1)[indices] # not shown
return X_batch, y_batch
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
best_theta
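# An alternative sketch (not the book's code): fetch_batch() above samples indices
# with replacement, so some instances may be seen twice per epoch and others not at
# all. An epoch-based shuffle like this one visits every instance exactly once.
def fetch_batch_shuffled(epoch, batch_index, batch_size):
    rnd = np.random.RandomState(epoch)
    shuffled_indices = rnd.permutation(m)
    batch_indices = shuffled_indices[batch_index * batch_size:(batch_index + 1) * batch_size]
    X_batch = scaled_housing_data_plus_bias[batch_indices]
    y_batch = housing.target.reshape(-1, 1)[batch_indices]
    return X_batch, y_batch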
reset_graph()
n_epochs = 1000 # not shown in the book
learning_rate = 0.01 # not shown
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X") # not shown
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y") # not shown
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions") # not shown
error = y_pred - y # not shown
mse = tf.reduce_mean(tf.square(error), name="mse") # not shown
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) # not shown
training_op = optimizer.minimize(mse) # not shown
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval()) # not shown
save_path = saver.save(sess, "/tmp/my_model.ckpt")
sess.run(training_op)
best_theta = theta.eval()
save_path = saver.save(sess, "/tmp/my_model_final.ckpt")
best_theta
with tf.Session() as sess:
saver.restore(sess, "/tmp/my_model_final.ckpt")
best_theta_restored = theta.eval() # not shown in the book
np.allclose(best_theta, best_theta_restored)
saver = tf.train.Saver({"weights": theta})
reset_graph()
# notice that we start with an empty graph.
saver = tf.train.import_meta_graph("/tmp/my_model_final.ckpt.meta") # this loads the graph structure
theta = tf.get_default_graph().get_tensor_by_name("theta:0") # not shown in the book
with tf.Session() as sess:
saver.restore(sess, "/tmp/my_model_final.ckpt") # this restores the graph's state
best_theta_restored = theta.eval() # not shown in the book
np.allclose(best_theta, best_theta_restored)
from datetime import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)
logdir
def make_log_subdir(run_id=None):
if run_id is None:
run_id = datetime.utcnow().strftime("%Y%m%d%H%M%S")
return "{}/run-{}/".format(root_logdir, run_id)
file_writer = tf.summary.FileWriter(logdir, graph=tf.get_default_graph())
os.listdir(root_logdir)
os.listdir(logdir)
file_writer.close()
%load_ext tensorboard
%tensorboard --logdir {root_logdir}
def save_graph(graph=None, run_id=None):
if graph is None:
graph = tf.get_default_graph()
logdir = make_log_subdir(run_id)
file_writer = tf.summary.FileWriter(logdir, graph=graph)
file_writer.close()
return logdir
save_graph()
%tensorboard --logdir {root_logdir}
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
logdir = make_log_subdir()
mse_summary = tf.summary.scalar('MSE', mse)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess: # not shown in the book
sess.run(init) # not shown
for epoch in range(n_epochs): # not shown
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
if batch_index % 10 == 0:
summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
step = epoch * n_batches + batch_index
file_writer.add_summary(summary_str, step)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval() # not shown
file_writer.close()
best_theta
%tensorboard --logdir {root_logdir}
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
with tf.name_scope("loss") as scope:
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
mse_summary = tf.summary.scalar('MSE', mse)
logdir = make_log_subdir()
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
if batch_index % 10 == 0:
summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
step = epoch * n_batches + batch_index
file_writer.add_summary(summary_str, step)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
file_writer.flush()
file_writer.close()
print("Best theta:")
print(best_theta)
print(error.op.name)
print(mse.op.name)
reset_graph()
a1 = tf.Variable(0, name="a") # name == "a"
a2 = tf.Variable(0, name="a") # name == "a_1"
with tf.name_scope("param"): # name == "param"
a3 = tf.Variable(0, name="a") # name == "param/a"
with tf.name_scope("param"): # name == "param_1"
a4 = tf.Variable(0, name="a") # name == "param_1/a"
for node in (a1, a2, a3, a4):
print(node.op.name)
reset_graph()
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
w1 = tf.Variable(tf.random_normal((n_features, 1)), name="weights1")
w2 = tf.Variable(tf.random_normal((n_features, 1)), name="weights2")
b1 = tf.Variable(0.0, name="bias1")
b2 = tf.Variable(0.0, name="bias2")
z1 = tf.add(tf.matmul(X, w1), b1, name="z1")
z2 = tf.add(tf.matmul(X, w2), b2, name="z2")
relu1 = tf.maximum(z1, 0., name="relu1")
relu2 = tf.maximum(z1, 0., name="relu2") # Oops, cut&paste error! Did you spot it?
output = tf.add(relu1, relu2, name="output")
reset_graph()
def relu(X):
w_shape = (int(X.get_shape()[1]), 1)
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
z = tf.add(tf.matmul(X, w), b, name="z")
return tf.maximum(z, 0., name="relu")
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
save_graph(run_id="relu1")
%tensorboard --logdir {root_logdir}
reset_graph()
def relu(X):
with tf.name_scope("relu"):
w_shape = (int(X.get_shape()[1]), 1) # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, 0., name="max") # not shown
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
save_graph(run_id="relu2")
%tensorboard --logdir {root_logdir}
reset_graph()
def relu(X, threshold):
with tf.name_scope("relu"):
w_shape = (int(X.get_shape()[1]), 1) # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, threshold, name="max")
threshold = tf.Variable(0.0, name="threshold")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X, threshold) for i in range(5)]
output = tf.add_n(relus, name="output")
reset_graph()
def relu(X):
with tf.name_scope("relu"):
if not hasattr(relu, "threshold"):
relu.threshold = tf.Variable(0.0, name="threshold")
w_shape = int(X.get_shape()[1]), 1 # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, relu.threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
reset_graph()
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(),
initializer=tf.constant_initializer(0.0))
with tf.variable_scope("relu", reuse=True):
threshold = tf.get_variable("threshold")
with tf.variable_scope("relu") as scope:
scope.reuse_variables()
threshold = tf.get_variable("threshold")
reset_graph()
def relu(X):
with tf.variable_scope("relu", reuse=True):
threshold = tf.get_variable("threshold")
w_shape = int(X.get_shape()[1]), 1 # not shown
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(),
initializer=tf.constant_initializer(0.0))
relus = [relu(X) for relu_index in range(5)]
output = tf.add_n(relus, name="output")
save_graph(run_id="relu6")
%tensorboard --logdir {root_logdir}
reset_graph()
def relu(X):
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0))
w_shape = (int(X.get_shape()[1]), 1)
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
z = tf.add(tf.matmul(X, w), b, name="z")
return tf.maximum(z, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
with tf.variable_scope("", default_name="") as scope:
first_relu = relu(X) # create the shared variable
scope.reuse_variables() # then reuse it
relus = [first_relu] + [relu(X) for i in range(4)]
output = tf.add_n(relus, name="output")
save_graph(run_id="relu8")
%tensorboard --logdir {root_logdir}
reset_graph()
def relu(X):
threshold = tf.get_variable("threshold", shape=(),
initializer=tf.constant_initializer(0.0))
w_shape = (int(X.get_shape()[1]), 1) # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = []
for relu_index in range(5):
with tf.variable_scope("relu", reuse=(relu_index >= 1)) as scope:
relus.append(relu(X))
output = tf.add_n(relus, name="output")
save_graph(run_id="relu9")
%tensorboard --logdir {root_logdir}
reset_graph()
with tf.variable_scope("my_scope"):
x0 = tf.get_variable("x", shape=(), initializer=tf.constant_initializer(0.))
x1 = tf.Variable(0., name="x")
x2 = tf.Variable(0., name="x")
with tf.variable_scope("my_scope", reuse=True):
x3 = tf.get_variable("x")
x4 = tf.Variable(0., name="x")
with tf.variable_scope("", default_name="", reuse=True):
x5 = tf.get_variable("my_scope/x")
print("x0:", x0.op.name)
print("x1:", x1.op.name)
print("x2:", x2.op.name)
print("x3:", x3.op.name)
print("x4:", x4.op.name)
print("x5:", x5.op.name)
print(x0 is x3 and x3 is x5)
reset_graph()
text = np.array("Do you want some café?".split())
text_tensor = tf.constant(text)
with tf.Session() as sess:
print(text_tensor.eval())
from sklearn.datasets import make_moons
m = 1000
X_moons, y_moons = make_moons(m, noise=0.1, random_state=42)
plt.plot(X_moons[y_moons == 1, 0], X_moons[y_moons == 1, 1], 'go', label="Positive")
plt.plot(X_moons[y_moons == 0, 0], X_moons[y_moons == 0, 1], 'r^', label="Negative")
plt.legend()
plt.show()
X_moons_with_bias = np.c_[np.ones((m, 1)), X_moons]
X_moons_with_bias[:5]
y_moons_column_vector = y_moons.reshape(-1, 1)
test_ratio = 0.2
test_size = int(m * test_ratio)
X_train = X_moons_with_bias[:-test_size]
X_test = X_moons_with_bias[-test_size:]
y_train = y_moons_column_vector[:-test_size]
y_test = y_moons_column_vector[-test_size:]
def random_batch(X_train, y_train, batch_size):
rnd_indices = np.random.randint(0, len(X_train), batch_size)
X_batch = X_train[rnd_indices]
y_batch = y_train[rnd_indices]
return X_batch, y_batch
X_batch, y_batch = random_batch(X_train, y_train, 5)
X_batch
y_batch
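# (added, hedged check of the claim in the accompanying text) sampling with
# replacement means an instance is missed by all ~m draws of one epoch with
# probability (1 - 1/m)**m ~ 1/e, so an epoch covers roughly two thirds of the set:
print(1 - (1 - 1/m) ** m)   # ~0.632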
reset_graph()
n_inputs = 2
X = tf.placeholder(tf.float32, shape=(None, n_inputs + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n_inputs + 1, 1], -1.0, 1.0, seed=42), name="theta")
logits = tf.matmul(X, theta, name="logits")
y_proba = 1 / (1 + tf.exp(-logits))
y_proba = tf.sigmoid(logits)
epsilon = 1e-7 # to avoid an overflow when computing the log
loss = -tf.reduce_mean(y * tf.log(y_proba + epsilon) + (1 - y) * tf.log(1 - y_proba + epsilon))
loss = tf.losses.log_loss(y, y_proba) # uses epsilon = 1e-7 by default
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
n_epochs = 1000
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = random_batch(X_train, y_train, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
loss_val = loss.eval({X: X_test, y: y_test})
if epoch % 100 == 0:
print("Epoch:", epoch, "\tLoss:", loss_val)
y_proba_val = y_proba.eval(feed_dict={X: X_test, y: y_test})
y_proba_val[:5]
y_pred = (y_proba_val >= 0.5)
y_pred[:5]
from sklearn.metrics import precision_score, recall_score
precision_score(y_test, y_pred)
recall_score(y_test, y_pred)
y_pred_idx = y_pred.reshape(-1) # a 1D array rather than a column vector
plt.plot(X_test[y_pred_idx, 1], X_test[y_pred_idx, 2], 'go', label="Positive")
plt.plot(X_test[~y_pred_idx, 1], X_test[~y_pred_idx, 2], 'r^', label="Negative")
plt.legend()
plt.show()
X_train_enhanced = np.c_[X_train,
np.square(X_train[:, 1]),
np.square(X_train[:, 2]),
X_train[:, 1] ** 3,
X_train[:, 2] ** 3]
X_test_enhanced = np.c_[X_test,
np.square(X_test[:, 1]),
np.square(X_test[:, 2]),
X_test[:, 1] ** 3,
X_test[:, 2] ** 3]
X_train_enhanced[:5]
reset_graph()
def logistic_regression(X, y, initializer=None, seed=42, learning_rate=0.01):
n_inputs_including_bias = int(X.get_shape()[1])
with tf.name_scope("logistic_regression"):
with tf.name_scope("model"):
if initializer is None:
initializer = tf.random_uniform([n_inputs_including_bias, 1], -1.0, 1.0, seed=seed)
theta = tf.Variable(initializer, name="theta")
logits = tf.matmul(X, theta, name="logits")
y_proba = tf.sigmoid(logits)
with tf.name_scope("train"):
loss = tf.losses.log_loss(y, y_proba, scope="loss")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
loss_summary = tf.summary.scalar('log_loss', loss)
with tf.name_scope("init"):
init = tf.global_variables_initializer()
with tf.name_scope("save"):
saver = tf.train.Saver()
return y_proba, loss, training_op, loss_summary, init, saver
from datetime import datetime
def log_dir(prefix=""):
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
if prefix:
prefix += "-"
name = prefix + "run-" + now
return "{}/{}/".format(root_logdir, name)
n_inputs = 2 + 4
logdir = log_dir("logreg")
X = tf.placeholder(tf.float32, shape=(None, n_inputs + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
y_proba, loss, training_op, loss_summary, init, saver = logistic_regression(X, y)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
checkpoint_path = "/tmp/my_logreg_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_logreg_model"
with tf.Session() as sess:
if os.path.isfile(checkpoint_epoch_path):
# if the checkpoint file exists, restore the model and load the epoch number
with open(checkpoint_epoch_path, "rb") as f:
start_epoch = int(f.read())
print("Training was interrupted. Continuing at epoch", start_epoch)
saver.restore(sess, checkpoint_path)
else:
start_epoch = 0
sess.run(init)
for epoch in range(start_epoch, n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = random_batch(X_train_enhanced, y_train, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
loss_val, summary_str = sess.run([loss, loss_summary], feed_dict={X: X_test_enhanced, y: y_test})
file_writer.add_summary(summary_str, epoch)
if epoch % 500 == 0:
print("Epoch:", epoch, "\tLoss:", loss_val)
saver.save(sess, checkpoint_path)
with open(checkpoint_epoch_path, "wb") as f:
f.write(b"%d" % (epoch + 1))
saver.save(sess, final_model_path)
y_proba_val = y_proba.eval(feed_dict={X: X_test_enhanced, y: y_test})
os.remove(checkpoint_epoch_path)
file_writer.close()
y_pred = (y_proba_val >= 0.5)
precision_score(y_test, y_pred)
recall_score(y_test, y_pred)
y_pred_idx = y_pred.reshape(-1) # a 1D array rather than a column vector
plt.plot(X_test[y_pred_idx, 1], X_test[y_pred_idx, 2], 'go', label="Positive")
plt.plot(X_test[~y_pred_idx, 1], X_test[~y_pred_idx, 2], 'r^', label="Negative")
plt.legend()
plt.show()
%tensorboard --logdir {root_logdir}
from scipy.stats import reciprocal
n_search_iterations = 10
for search_iteration in range(n_search_iterations):
batch_size = np.random.randint(1, 100)
learning_rate = reciprocal(0.0001, 0.1).rvs(random_state=search_iteration)
n_inputs = 2 + 4
logdir = log_dir("logreg")
print("Iteration", search_iteration)
print(" logdir:", logdir)
print(" batch size:", batch_size)
print(" learning_rate:", learning_rate)
print(" training: ", end="")
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
y_proba, loss, training_op, loss_summary, init, saver = logistic_regression(
X, y, learning_rate=learning_rate)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
n_epochs = 10001
n_batches = int(np.ceil(m / batch_size))
final_model_path = "./my_logreg_model_%d" % search_iteration
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = random_batch(X_train_enhanced, y_train, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
loss_val, summary_str = sess.run([loss, loss_summary], feed_dict={X: X_test_enhanced, y: y_test})
file_writer.add_summary(summary_str, epoch)
if epoch % 500 == 0:
print(".", end="")
saver.save(sess, final_model_path)
print()
y_proba_val = y_proba.eval(feed_dict={X: X_test_enhanced, y: y_test})
y_pred = (y_proba_val >= 0.5)
print(" precision:", precision_score(y_test, y_pred))
print(" recall:", recall_score(y_test, y_pred))
file_writer.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating and running a graph
Step2: Managing graphs
Step3: Linear Regression
Step4: Compare with pure NumPy
Step5: Compare with Scikit-Learn
Step6: Using Batch Gradient Descent
Step7: Manually computing the gradients
Step8: Using autodiff
Step9: How could you find the partial derivatives of the following function with regards to a and b?
Step10: Let's compute the function at $a=0.2$ and $b=0.3$, and the partial derivatives at that point with regards to $a$ and with regards to $b$
Step11: Using a GradientDescentOptimizer
Step12: Using a momentum optimizer
Step13: Feeding data to the training algorithm
Step14: Mini-batch Gradient Descent
Step15: Saving and restoring a model
Step16: If you want to have a saver that loads and restores theta with a different name, such as "weights"
Step17: By default the saver also saves the graph structure itself in a second file with the extension .meta. You can use the function tf.train.import_meta_graph() to restore the graph structure. This function loads the graph into the default graph and returns a Saver that can then be used to restore the graph state (i.e., the variable values)
Step18: This means that you can import a pretrained model without having to have the corresponding Python code to build the graph. This is very handy when you keep tweaking and saving your model
Step19: In fact, let's create a function that will generate such a subdirectory path every time we need one
Step20: Now let's save the default graph to our log subdirectory using tf.summary.FileWriter()
Step21: Now the root log directory contains one subdirectory
Step22: And this subdirectory contains one log file (called a "TF events" file) for the graph
Step23: However, the actual graph data may still be in the OS's file cache, so we need to flush() or close() the FileWriter to be sure that it's well written to disk
Step24: Okay, now let's start TensorBoard! It runs as a web server in a separate process, so we first need to start it. One way to do that is to run the tensorboard command in a terminal window. Another is to use the %tensorboard Jupyter extension, which takes care of starting TensorBoard, and it allows us to view TensorBoard's user interface directly within Jupyter. Let's load this extension now
Step25: Next, let's use the %tensorboard extension to start the TensorBoard server. We need to point it to the root log directory
Step26: Great! We can now visualize graphs.
Step27: Let's see if it works
Step28: Now let's look at TensorBoard again. Note that this will reuse the existing TensorBoard server since we're reusing the same root log directory
Step29: Notice that you can switch between runs by picking the log subdirectory you want from the "Run" dropdown list (at the top left).
Step30: Now let's look at TensorBoard. Try going to the SCALARS tab
Step31: Name scopes
Step32: Modularity
Step33: Much better, using a function to build the ReLUs
Step34: Even better using name scopes
Step35: Sharing Variables
Step36: Extra material
Step37: The first variable_scope() block first creates the shared variable x0, named my_scope/x. For all operations other than shared variables (including non-shared variables), the variable scope acts like a regular name scope, which is why the two variables x1 and x2 have a name with a prefix my_scope/. Note however that TensorFlow makes their names unique by adding an index
Step38: Autodiff
Step39: Let's take a peek at the dataset
Step40: We must not forget to add an extra bias feature ($x_0 = 1$) to every instance. For this, we just need to add a column full of 1s on the left of the input matrix $\mathbf{X}$
Step41: Let's check
Step42: Looks good. Now let's reshape y_moons to make it a column vector (i.e. a 2D array with a single column)
Step43: Now let's split the data into a training set and a test set
Step44: Ok, now let's create a small function to generate training batches. In this implementation we will just pick random instances from the training set for each batch. This means that a single batch may contain the same instance multiple times, and also a single epoch may not cover all the training instances (in fact it will generally cover only about two thirds of the instances). However, in practice this is not an issue and it simplifies the code
Step45: Let's look at a small batch
Step46: Great! Now that the data is ready to be fed to the model, we need to build that model. Let's start with a simple implementation, then we will add all the bells and whistles.
Step47: The moons dataset has two input features, since each instance is a point on a plane (i.e., 2-Dimensional)
Step48: Now let's build the Logistic Regression model. As we saw in chapter 4, this model first computes a weighted sum of the inputs (just like the Linear Regression model), and then it applies the sigmoid function to the result, which gives us the estimated probability for the positive class
Step49: In fact, TensorFlow has a nice function tf.sigmoid() that we can use to simplify the last line of the previous code
Step50: As we saw in chapter 4, the log loss is a good cost function to use for Logistic Regression
Step51: But we might as well use TensorFlow's tf.losses.log_loss() function
Step52: The rest is pretty standard
Step53: All we need now (in this minimal version) is the variable initializer
Step54: And we are ready to train the model and use it for predictions!
Step55: Note
Step56: To classify each instance, we can go for maximum likelihood
Step57: Depending on the use case, you may want to choose a different threshold than 0.5
Step58: Let's plot these predictions to see what they look like
Step59: Well, that looks pretty bad, doesn't it? But let's not forget that the Logistic Regression model has a linear decision boundary, so this is actually close to the best we can do with this model (unless we add more features, as we will show in a second).
Step60: This is what the "enhanced" training set looks like
Step61: Ok, next let's reset the default graph
Step62: Now let's define the logistic_regression() function to create the graph. We will leave out the definition of the inputs X and the targets y. We could include them here, but leaving them out will make it easier to use this function in a wide range of use cases (e.g. perhaps we will want to add some preprocessing steps for the inputs before we feed them to the Logistic Regression model).
Step63: Let's create a little function to get the name of the log directory to save the summaries for Tensorboard
Step64: Next, let's create the graph, using the logistic_regression() function. We will also create the FileWriter to save the summaries to the log directory for Tensorboard
Step65: At last we can train the model! We will start by checking whether a previous training session was interrupted, and if so we will load the checkpoint and continue training from the epoch number we saved. In this example we just save the epoch number to a separate file, but in chapter 11 we will see how to store the training step directly as part of the model, using a non-trainable variable called global_step that we pass to the optimizer's minimize() method (see the short sketch after this step list).
Step66: Once again, we can make predictions by just classifying as positive all the instances whose estimated probability is greater or equal to 0.5
Step67: Now that's much, much better! Apparently the new features really helped a lot.
Step68: Now you can play around with the hyperparameters (e.g. the batch_size or the learning_rate) and run training again and again, comparing the learning curves. You can even automate this process by implementing grid search or randomized search. Below is a simple implementation of a randomized search on both the batch size and the learning rate. For the sake of simplicity, the checkpoint mechanism was removed.
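Step65 mentions the global_step alternative without showing it. The following is only a rough sketch added here (not from the original notebook) of how a non-trainable global_step variable is typically wired into a TF 1.x optimizer; the variable names are illustrative:
global_step = tf.Variable(0, trainable=False, name="global_step")
training_op = optimizer.minimize(loss, global_step=global_step)
# each sess.run(training_op) now also increments global_step, so the Saver
# checkpoints the training step together with the model parameters.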
|
13,332
|
<ASSISTANT_TASK:>
Python Code:
# imports assumed from an earlier, unshown part of the notebook:
import time
import numpy as np
import matplotlib.pyplot as plt
from numpy import log
from numpy.linalg import inv
from scipy.special import digamma
def find_df(v, p, u, tau):
return -digamma(v/2.) + log(v/2.) + (tau * (log(u) - u)).sum()/tau.sum() + 1 + (digamma((v+p)/2.)-log((v+p)/2.))
u_test = np.array([[1,1], [2,2], [3,3]])
tau_test = np.array([[4,4], [5,5], [6,6]])
find_df(1, 2, u_test, tau_test)
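# NOTE (added): the cells below also rely on multivariate_t_rvs, multivariate_t
# and m_step, which are defined in an earlier part of the notebook that is not
# shown here. As a reference only, a minimal sketch of a standard ECM update for
# one multivariate-t component is given below; it is an assumption, not
# necessarily the author's exact implementation.
def m_step(X, mu, cov, u, tau):
    w = tau * u                                     # responsibility * latent weight, shape (n, 1)
    mu_new = (w * X).sum(axis=0) / w.sum()          # weighted location
    delta = X - mu_new
    cov_new = (w * delta).T.dot(delta) / tau.sum()  # weighted scatter
    return mu_new, cov_new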
def get_random(X):
size = len(X)
idx = np.random.choice(range(size))
return X[idx]
actual_mu01 = [-.2, .45]
actual_cov01 = [[.40, 0], [.7, 1.55]]
actual_df01 = 27
actual_mu02 = [.9, -.5]
actual_cov02 = [[1.5, 0.7], [0, 0.5]]
actual_df02 = 47
size = 300
x01 = multivariate_t_rvs(m=actual_mu01, S=actual_cov01, df=actual_df01, n=size)
x02 = multivariate_t_rvs(m=actual_mu02, S=actual_cov02, df=actual_df02, n=size)
X = np.concatenate([x01, x02])
X.shape
xmin, xmax = min(X.T[0]), max(X.T[0])
ymin, ymax = min(X.T[1]), max(X.T[1])
x, y = np.mgrid[xmin:xmax:.1, ymin:ymax:.1]
xy = np.column_stack([x.ravel(),y.ravel()])
xy.shape
t01 = multivariate_t(actual_mu01, actual_cov01, actual_df01)
t02 = multivariate_t(actual_mu02, actual_cov02, actual_df02)
z01 = []
z02 = []
for _ in xy:
z01.append(t01.pdf(_.reshape(1, -1)))
z02.append(t02.pdf(_.reshape(1, -1)))
z01 = np.reshape(z01, x.shape)
z02 = np.reshape(z02, x.shape)
# Plotting
fig = plt.figure(figsize=(14, 5))
plt.subplot(121)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z01, cmap='ocean')
plt.contour(x, y, z02, cmap='hot')
plt.subplot(122)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z01+z02)
fig.savefig('draft05 - actual.png')
plt.show()
n_iter = 50 # number of iterations
# guessing mixture 01
mu01 = get_random(X)
cov01 = np.cov(X.T.copy())
# known variables mix01
df01 = 4
p01 = 2
# guessing mixture 02
mu02 = get_random(X)
cov02 = np.cov(X.T.copy())
# known variables mix 02
df02 = 4
p02 = 2
# guessing the pi parameter
pi = .5
t01 = multivariate_t(mu01, cov01, df01)
t02 = multivariate_t(mu02, cov02, df02)
start = time.time()
for i in range(n_iter):
# E-step: Calculating tau
wp1 = t01.pdf(X) * pi
wp2 = t02.pdf(X) * (1 - pi)
wp_total = wp1 + wp2
wp1 /= wp_total; wp1 = wp1.reshape(-1, 1)
wp2 /= wp_total; wp2 = wp2.reshape(-1, 1)
# E-Step: Calculating u
u01 = []
for delta in X-mu01:
u01.append(delta.dot(inv(cov01)).dot(delta))
u01 = np.array(u01)
u01 = (df01 + p01)/(df01 + u01); u01 = u01.reshape(-1, 1)
u02 = []
for delta in X-mu02:
u02.append(delta.dot(inv(cov02)).dot(delta))
u02 = np.array(u02)
u02 = (df02 + p02)/(df02 + u02); u02 = u02.reshape(-1, 1)
# CM-Step 01
mu01, cov01 = m_step(X, mu01, cov01, u01, wp1)
mu02, cov02 = m_step(X, mu02, cov02, u02, wp2)
# E-Step 02
u01 = []
for delta in X-mu01:
u01.append(delta.dot(inv(cov01)).dot(delta))
u01 = np.array(u01)
u01 = (df01 + p01)/(df01 + u01); u01 = u01.reshape(-1, 1)
u02 = []
for delta in X-mu02:
u02.append(delta.dot(inv(cov02)).dot(delta))
u02 = np.array(u02)
u02 = (df02 + p02)/(df02 + u02); u02 = u02.reshape(-1, 1)
# CM-Step 02
## Finding mix01 degrees of freedom
v01 = 0
my_range = np.arange(df01, df01+3, .01)
for _ in my_range:
solution = find_df(_, p01, u01, wp1)
if solution < 0+1e-4 and solution > 0-1e-4:
v01 = _
break
## Finding mix01 degrees of freedom
v02 = 0
my_range = np.arange(df02, df02+3, .01)
for _ in my_range:
solution = find_df(_, p02, u02, wp2)
if solution < 0+1e-4 and solution > 0-1e-4:
v02 = _
break
# Assigning parameters
t01.mu = mu01; t01.sigma = cov01
t02.mu = mu02; t02.sigma = cov02
df01 = v01; df02 = v02
pi = wp1.sum()/len(wp1)
print('elapsed time: %s' % (time.time() - start))
print('pi: {0:4.06}'.format(pi))
print('mu01: {0}; mu02: {1}'.format(mu01, mu02))
print('cov01\n%s' % cov01)
print('cov02\n%s' % cov02)
print('df01: %.6f; df02: %.6f;' % (df01, df02))
xmin, xmax = min(X.T[0]), max(X.T[0])
ymin, ymax = min(X.T[1]), max(X.T[1])
x, y = np.mgrid[xmin:xmax:.1, ymin:ymax:.1]
xy = np.column_stack([x.ravel(),y.ravel()])
xy.shape
t01 = multivariate_t(mu01, cov01, df01)
t02 = multivariate_t(mu02, cov02, df02)
z01 = []
z02 = []
z03 = []
for _ in xy:
_ = _.reshape(1, -1)
z01.append(t01.pdf(_))
z02.append(t02.pdf(_))
z03.append(pi*t01.pdf(_) + (1-pi)*t02.pdf(_))
z01 = np.reshape(z01, x.shape)
z02 = np.reshape(z02, x.shape)
z03 = np.reshape(z03, x.shape)
fig = plt.figure(figsize=(14, 5))
plt.subplot(121)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z01, cmap='ocean')
plt.contour(x, y, z02, cmap='hot')
plt.subplot(122)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z03)
fig.savefig('draft05 - estimated.png')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Expectation Maximization with Mixtures
Step2: Plotting the sample with actual parameters
Step3: Estimating parameters
|
13,333
|
<ASSISTANT_TASK:>
Python Code:
def Centered_Triangular_num(n):
    # n-th centered triangular number: (3*n*n + 3*n + 2) / 2
    return (3 * n * n + 3 * n + 2) // 2
if __name__ == '__main__':
    n = 3
    print(Centered_Triangular_num(n))
    n = 12
    print(Centered_Triangular_num(n))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
13,334
|
<ASSISTANT_TASK:>
Python Code:
from precovery_utils import ssoisPrecovery
ssois_query = ssoisPrecovery()
query_url = ssois_query.format_search_by_arc_url('kbmod_mpc.dat')
print(query_url)
results_df = ssois_query.query_ssois(query_url)
results_df.head()
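# (added, optional) quick look at how many precovery images were returned and the
# time span they cover; the MJD column name is taken from the plotting cell below.
print(len(results_df))
print(results_df['MJD'].min(), results_df['MJD'].max())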
from IPython.display import HTML
image_data_link = results_df["MetaData"].iloc[-1]
HTML('<a href="{}">{}</a>'.format(image_data_link, image_data_link))
%pylab inline
from ephem_utils import mpc_reader
kbmod_observations = mpc_reader('kbmod_mpc.dat')
scatter(kbmod_observations.ra.deg, kbmod_observations.dec.deg, marker='x', s=200, c='r', label='KBMOD Observations', zorder=10)
plt.legend()
scatter(results_df['Object_RA'], results_df['Object_Dec'], c=results_df['MJD'])
cbar = plt.colorbar()
plt.xlabel('RA')
plt.ylabel('Dec')
cbar.set_label('MJD')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Query URL from MPC formatted file
Step2: Query service via URL
Step3: Create direct data download link
Step4: Compare KBMOD data to available data
|
13,335
|
<ASSISTANT_TASK:>
Python Code:
import toytree # a tree plotting library
import toyplot # a general plotting library
import numpy as np # numerical library
print(toytree.__version__)
print(toyplot.__version__)
print(np.__version__)
# load a toytree from a newick string at a URL
tre = toytree.tree("https://eaton-lab.org/data/Cyathophora.tre")
# root and draw the tree (more details on this coming up...)
rtre = tre.root(wildcard="prz")
rtre.draw(tip_labels_align=True);
# newick with edge-lengths & support values
newick = "((a:1,b:1)90:3,(c:3,(d:1, e:1)100:2)100:1)100;"
tre0 = toytree.tree(newick, tree_format=0)
# newick with edge-lengths & string node-labels
newick = "((a:1,b:1)A:3,(c:3,(d:1, e:1)B:2)C:1)root;"
tre1 = toytree.tree(newick, tree_format=1)
# parse an NHX format string with node supports and names
nhx = "((a:3[&&NHX:name=a:support=100],b:2[&&NHX:name=b:support=100]):4[&&NHX:name=ab:support=60],c:5[&&NHX:name=c:support=100]);"
ntre = toytree.tree(nhx)
# parse a mrbayes format file with NHX-like node and edge info
mb = "((a[&prob=100]:0.1[&length=0.1],b[&prob=100]:0.2[&length=0.2])[&prob=90]:0.4[&length=0.4],c[&prob=100]:0.6[&length=0.6]);"
mtre = toytree.tree(mb, tree_format=10)
# parse a NEXUS formatted file containing a tree of any supported format
nex = """
#NEXUS
begin trees;
    translate
        1 apple,
        2 blueberry,
        3 cantaloupe,
        4 durian,
        ;
    tree tree0 = [&U] ((1,2),(3,4));
end;
"""
xtre = toytree.tree(nex)
rtre.ntips
rtre.nnodes
tre.is_rooted(), rtre.is_rooted()
rtre.get_tip_labels()
rtre.get_edges()
# a TreeNode object is contained within every ToyTree at .tree
tre.treenode
# a ToyTree object
toytree.tree("((a, b), c);")
# a MultiTree object
toytree.mtree([tre, tre, tre])
rtre.draw()
# the semicolon hides the returned text of the Canvas and Cartesian objects
rtre.draw();
# or, we can store them as variables (this allows more editing on them later)
canvas, axes, mark = rtre.draw()
# drawing with pre-built tree_styles
rtre.draw(tree_style='n'); # normal-style
rtre.draw(tree_style='d'); # dark-style
# 'ts' is also a shortcut for tree_style
rtre.draw(ts='o'); # umlaut-style
# define a style dictionary
mystyle = {
"layout": 'd',
"edge_type": 'p',
"edge_style": {
"stroke": toytree.colors[2],
"stroke-width": 2.5,
},
"tip_labels_align": True,
"tip_labels_colors": toytree.colors[0],
"tip_labels_style": {
"font-size": "10px"
},
"node_labels": False,
"node_sizes": 8,
"node_colors": toytree.colors[2],
}
# use your custom style dictionary in one or more tree drawings
rtre.draw(height=400, **mystyle);
# hover over nodes to see pop-up elements
rtre.draw(height=350, node_hover=True, node_sizes=10, tip_labels_align=True);
rtre.draw(node_labels='support', node_sizes=15);
# You can do the same without printing the 'idx' label on nodes.
rtre.draw(
node_labels=None,
node_sizes=10,
node_colors='grey'
);
tre0.get_node_values("support", show_root=1, show_tips=1)
tre0.get_node_values("support", show_root=1, show_tips=0)
tre0.get_node_values("support", show_root=0, show_tips=0)
# show support values
tre0.draw(
node_labels=tre0.get_node_values("support", 0, 0),
node_sizes=20,
);
# show support values
tre0.draw(
node_labels=tre0.get_node_values("support", 1, 1),
node_sizes=20,
);
# build a color list in node plot order with different values based on support
colors = [
toytree.colors[0] if i==100 else toytree.colors[1]
for i in rtre.get_node_values('support', 1, 1)
]
# You can do the same without printing the 'idx' label on nodes.
rtre.draw(
node_sizes=10,
node_colors=colors
);
# draw a plot and store the Canvas object to a variable
canvas, axes, mark = rtre.draw(width=400, height=300);
# for sharing through web-links (or even email!) html is great!
toyplot.html.render(canvas, "/tmp/tree-plot.html")
# for creating scientific figures SVG is often the most useful format
import toyplot.svg
toyplot.svg.render(canvas, "/tmp/tree-plot.svg")
import toyplot.pdf
toyplot.pdf.render(canvas, "/tmp/tree-plot.pdf")
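# (added, optional) toyplot can also rasterize the same canvas to PNG; this extra
# is not in the original notebook and assumes the optional Ghostscript dependency
# used by toyplot.png is available.
import toyplot.png
toyplot.png.render(canvas, "/tmp/tree-plot.png")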
# set dimensions of the canvas
canvas = toyplot.Canvas(width=700, height=250)
# dissect canvas into multiple cartesian areas (x1, x2, y1, y2)
ax0 = canvas.cartesian(bounds=('10%', '45%', '10%', '90%'))
ax1 = canvas.cartesian(bounds=('55%', '90%', '10%', '90%'))
# call draw with the 'axes' argument to pass it to a specific cartesian area
style = {
"tip_labels_align": True,
"tip_labels_style": {
"font-size": "9px"
},
}
rtre.draw(axes=ax0, **style);
rtre.draw(axes=ax1, tip_labels_colors='indigo', **style);
# hide the axes (e.g, ticks and splines)
ax0.show=False
ax1.show=False
# store the returned Canvas and Axes objects
canvas, axes, makr = rtre.draw(
width=300,
height=300,
tip_labels_align=True,
tip_labels=False,
)
# show the axes coordinates
axes.show = True
axes.x.ticks.show = True
axes.y.ticks.show = True
# overlay a grid
axes.hlines(np.arange(0, 13, 2), style={"stroke": "red", "stroke-dasharray": "2,4"})
axes.vlines(0, style={"stroke": "blue", "stroke-dasharray": "2,4"});
# store the returned Canvas and Axes objects
canvas, axes, mark = rtre.draw(
width=300,
height=300,
tip_labels=False,
tip_labels_align=True,
layout='d',
)
# show the axes coordinates
axes.show = True
axes.x.ticks.show = True
axes.y.ticks.show = True
# overlay a grid
axes.vlines(np.arange(0, 13, 2), style={"stroke": "red", "stroke-dasharray": "2,4"})
axes.hlines(0, style={"stroke": "blue", "stroke-dasharray": "2,4"});
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and draw your first tree
Step2: Parsing Newick/Nexus data
Step4: To parse either format you can tell toytree the format of the newick string following the tree parsing formats in ete. The default option, and most common format is 0. If you don't enter a tree_format argument the default format will usually parse it just fine. Toytree can also parse extended newick format (nhx) files, which store additional metadata, as well as mrbayes formatted files (tree_format=10) which are a variant of NHX. Any of these formats can be parsed from a NEXUS file automatically.
Step5: Accessing tree data
Step6: Tree Classes
Step7: Drawing trees
Step8: Drawing trees
Step9: Drawing trees
Step10: In the example above the labels on each node indicate their "idx" value, which is simply a unique identifier given to every node. We could alternatively select one of the features that you could see listed on the node when you hovered over it and toytree will display that value on the node instead. In the example below we plot the node support values. You'll notice that in this context no values were shown for the tip nodes, but instead only for internal nodes. More on this below.
Step11: You can also create plots with the nodes shown, but without node labels. This is often most useful when combined with mapping different colors to nodes to represent different classes of data. In the example below we pass a single color and size for all nodes.
Step12: You can draw values on all the nodes, or only on non-tip nodes, or only on internal nodes (not tips or root). Use the .get_node_values function of ToyTrees to build a list of values for plotting on the tree. Because the data are extracted from the same tree they will be plotted on the values will always be ordered properly.
Step13: Because .get_node_values() returns values in node plot order, it is especially useful for building lists of values for color mapping on nodes. Here we map different colors to nodes depending on whether the support value is 100 or not.
Step14: Drawing
Step15: HTML rendering is the default format. This will save the figure as a vector graphic (SVG) wrapped in HTML with additional optional javascript wrapping for interactive features. You can share the file with others and anyone can open it in a browser. You can embed it on your website, or even display it in emails!
Step16: Optional formats
Step17: Despite the advantages of working with the SVG or HTML formats (e.g., vector graphics and interactive pop-ups), if you're like me you still sometimes love to have an old-fashioned PDF. Again, you can import this from toyplot.
Step18: Drawing
Step19: The Coordinates
|
13,336
|
<ASSISTANT_TASK:>
Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
PATH = "data/planet/"
!ls {PATH}
from fastai.plots import *
def get_1st(path): return glob(f'{path}/*.*')[0]
dc_path = "data/dogscats/valid/"
list_paths = [get_1st(f"{dc_path}cats"), get_1st(f"{dc_path}dogs")]
plots_from_files(list_paths, titles=["cat", "dog"], maintitle="Single-label Classification")
list_paths = [f"{PATH}train-jpg/train_0.jpg", f"{PATH}train-jpg/train_1.jpg"]
titles = ["haze primary", "agriculture clear primary water"]
plots_from_files(list_paths, titles=titles, maintitle="Multi-label Classification")
from planet import f2
metrics = [f2]
f_model = resnet34
label_csv = f'{PATH}train_v2.csv'
n = len(list(open(label_csv)))-1
val_idxs = get_cv_idxs(n)
def get_data(sz,bs=64):
tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_top_down, max_zoom=1.05)
return ImageClassifierData.from_csv(PATH, 'train-jpg', label_csv, tfms=tfms, bs=bs,
suffix='.jpg', val_idxs=val_idxs, test_name='test-jpg')
data = get_data(256) # the planet images are 256x256
# turning a dataloader into an iterator:
x,y = next(iter(data.val_dl)) # note: .._ds: dataset; .._dl: dataloader | PyTorch concepts
# idea: dataset gives you a single image or object back
# dataloader gives you back a single (transformed) mini-batch (next mini-batch only)
y # 64x17: batch size x number of classes
# zip, basically zips two lists together into an iterator
list(zip(data.classes, y[0])) # getting the 0th image's labels -- from validation set
data.val_ds.fnames[:15]
plt.imshow(data.val_ds.get_x(0))
plt.imshow(data.val_ds.denorm(to_np(x))[0]*1.4);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 Multi-Label versus Single-Label Classification
Step2: In single-label classification each sample belongs to one class. In the previous example, each image is either a dog or a cat.
Step3: Softmax wouldn't be good because it wants to "pick a thing". Instead, we'll use the sigmoid. In multi-label classification, each sample can belong to one or more classes. In the previous example, the first image belongs to two classes (a small numeric illustration of sigmoid vs. softmax follows after this step list).
Step4: We use a different set of data augmentations for this dataset -- we also allow vertical flips, since we don't expect the vertical orientation of satellite images to change our classifications.
Step5: Behind the scenes, fastai & PyTorch are turning our labels into one-hot encoded labels.
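A small numeric illustration (added here; not part of the original notebook) of the point in Step3 -- softmax makes the class scores compete for a single pick, while sigmoid scores each class independently:
import numpy as np
z = np.array([0.5, 2.0, -1.0])            # activations for 3 hypothetical classes
print(np.exp(z) / np.exp(z).sum())        # softmax: ~[0.18, 0.79, 0.04], sums to 1
print(1 / (1 + np.exp(-z)))               # sigmoid: ~[0.62, 0.88, 0.27], independent probabilities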
|
13,337
|
<ASSISTANT_TASK:>
Python Code:
def raiz(x_l, x_u):
x_r = (x_l + x_u)/2
return x_r
def intervalo_de_raiz(f, x_l, x_u):
x_r = raiz(x_l, x_u)
if f(x_l)*f(x_r) < 0:
x_u = x_r
if f(x_l)*f(x_r) > 0:
x_l = x_r
return x_l, x_u
def biseccion(f, x_inferior, x_superior):
print("{0:2s}\t{1:12s}\t{2:12s}\t{3:12s}\t{4:16s}".format(' i', 'x inferior', 'x superior', 'raiz', 'error relativo %'))
x_raiz_actual = raiz(x_inferior, x_superior)
i = 0
print("{0:2d}\t{1:12.10f}\t{2:12.10f}\t{3:12.10f}\t{4:16s}".format(i, x_inferior, x_superior, x_raiz_actual, '????????????????'))
error_permitido = 0.000001
while True:
x_raiz_anterior = x_raiz_actual
x_inferior, x_superior = intervalo_de_raiz(f, x_inferior, x_superior)
x_raiz_actual = raiz(x_inferior, x_superior)
if x_raiz_actual != 0:
error_relativo = abs((x_raiz_actual - x_raiz_anterior)/x_raiz_actual)*100
i = i + 1
print("{0:2d}\t{1:12.10f}\t{2:12.10f}\t{3:12.10f}\t{4:16.13f}".format(i, x_inferior, x_superior, x_raiz_actual, error_relativo))
if (error_relativo < error_permitido) or (i>=20):
break
print('\nx =', x_raiz_actual)
def f(x):
y = x**3 + 4*x**2 - 10
return y
intervalo_de_raiz(f, 1, 2)
intervalo_de_raiz(f, 1, 1.5)
intervalo_de_raiz(f, 1.25, 1.5)
biseccion(f, 1, 2)
from math import sin, cos
def g(x):
y = sin(10*x) + cos(3*x)
return y
intervalo_de_raiz(g, 14, 15)
intervalo_de_raiz(g, 14.5, 15)
intervalo_de_raiz(g, 14.75, 15)
biseccion(g, 14, 15)
biseccion(g, 12, 16)
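# (added, optional cross-check) SciPy's Brent root finder should agree with the
# bisection result for f on [1, 2] to within the chosen tolerance.
from scipy.optimize import brentq
print(brentq(f, 1, 2))   # ~1.3652300134 for f(x) = x**3 + 4*x**2 - 10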
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Non-vectorized implementation
Step2: Example 2
Step3: Example 3
Step4: Example 4
|
13,338
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import math
from numba import vectorize, cuda
from matplotlib import pyplot as plt
%matplotlib inline
!find / -iname 'libdevice'
!find / -iname 'libnvvm.so'
import os
os.environ['NUMBAPRO_LIBDEVICE'] = "/usr/local/cuda-10.0/nvvm/libdevice"
os.environ['NUMBAPRO_NVVM'] = "/usr/local/cuda-10.0/nvvm/lib64/libnvvm.so"
@vectorize(['int64(int64, int64)'], target='cuda')
def add_ufunc(x, y):
return x + y
x = np.arange(10)
y = 2 * x
add_ufunc(x, y)
%timeit add_ufunc(x, y) # add code here
@vectorize(['int64(int64, int64)'], target='cpu') # add code here
def add_ufunc_cpu(x, y):
return x + y
%timeit add_ufunc_cpu(x, y)
@vectorize(['float32(float32, float32)'], target='cuda') # add code here
def add_ufunc(x, y):
return x + y
n = 100000
x = np.arange(n).astype(np.float32)
y = 2 * x
x_device = cuda.to_device(x)
y_device = cuda.to_device(y)
print(x_device)
print(x_device.shape)
print(x_device.dtype)
%timeit add_ufunc(x, y) # add code here
%timeit add_ufunc(x_device, y_device) # add code here
out_device = cuda.device_array(shape=(n,), dtype=np.float32) # does not initialize the contents, much like np.empty()
%timeit add_ufunc(x_device, y_device, out=out_device)
out_host = out_device.copy_to_host()
print(out_host[:10])
@vectorize(['float32(float32, float32)'], target='cpu') # add code here
def add_ufunc_cpu(x, y):
return x + y
%timeit add_ufunc_cpu(x, y)
@vectorize(['float32(float32, float32)'], target='cuda')
def add_ufunc(x, y):
return x + y
@vectorize(['float32(float32, float32, float32)'], target='cuda')
def make_pulses(i, period, amplitude):
return max(math.sin(i / period) - 0.3, 0.0) * amplitude
n = 100000
noise = (np.random.normal(size=n) * 3).astype(np.float32)
t = np.arange(n, dtype=np.float32)
period = n / 23
pulses = make_pulses(t, period, 100.0)
waveform = add_ufunc(pulses, noise)
plt.plot(waveform)
t_device = cuda.to_device(t)
noise_device = cuda.to_device(noise)
pulses_device = cuda.device_array(shape=(n,), dtype=np.float32)
waveform_device = cuda.device_array(shape=(n,), dtype=np.float32)
%timeit make_pulses(t_device, period, 100.0, out=pulses_device)
%timeit add_ufunc(pulses_device, noise_device, out=waveform_device)
waveform = waveform_device.copy_to_host()
plt.plot(waveform)
@cuda.jit
def add_kernel(x, y, out):
tidx = cuda.threadIdx.x # this is the unique thread ID within a 1D block
bidx = cuda.blockIdx.x # Similarly, this is the unique block ID within the 1D grid
block_dimx = cuda.blockDim.x # number of threads per block
grid_dimx = cuda.gridDim.x # number of blocks in the grid
start = tidx + bidx * block_dimx
stride = block_dimx * grid_dimx
# assuming x and y inputs are same length
for i in range(start, x.shape[0], stride):
out[i] = x[i] + y[i]
n = 100000
x = np.arange(n).astype(np.float32)
y = 2 * x
out = np.empty_like(x)
threads_per_block = 128
blocks_per_grid = 30
add_kernel[blocks_per_grid, threads_per_block](x, y, out)
print(out[:10])
@cuda.jit
def add_kernel(x, y, out):
start = cuda.grid(1) # the 1 argument means a one dimensional thread grid, this returns a single value
stride = cuda.gridsize(1) # ditto
# assuming x and y inputs are same length
for i in range(start, x.shape[0], stride):
out[i] = x[i] + y[i]
# add code here
x_device = cuda.to_device(x)
y_device = cuda.to_device(y)
out_device = cuda.device_array_like(x)
# add code here
%timeit add_kernel[blocks_per_grid, threads_per_block](x, y, out)
%timeit add_kernel[blocks_per_grid, threads_per_block](x_device, y_device, out_device); out = out_device.copy_to_host()
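# (added note) kernel launches are asynchronous; the second timing above is kept
# honest because copy_to_host() waits for the kernel to finish. To time the kernel
# alone you could synchronize explicitly instead:
%timeit add_kernel[blocks_per_grid, threads_per_block](x_device, y_device, out_device); cuda.synchronize()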
@cuda.jit
def thread_counter_race_condition(global_counter):
global_counter[0] += 1 # This is bad
@cuda.jit
def thread_counter_safe(global_counter):
cuda.atomic.add(global_counter, 0, 1) # Safely add 1 to offset 0 in global_counter array
# This gets the wrong answer
global_counter = cuda.to_device(np.array([0], dtype=np.int32))
thread_counter_race_condition[64, 64](global_counter)
print('Should be %d:' % (64*64), global_counter.copy_to_host())
# This works correctly
global_counter = cuda.to_device(np.array([0], dtype=np.int32))
thread_counter_safe[64, 64](global_counter)
print('Should be %d:' % (64*64), global_counter.copy_to_host())
def cpu_histogram(x, xmin, xmax, histogram_out):
'''Increment bin counts in histogram_out, given histogram range [xmin, xmax).'''
# Note that we don't have to pass in nbins explicitly, because the size of histogram_out determines it
nbins = histogram_out.shape[0]
bin_width = (xmax - xmin) / nbins
# This is a very slow way to do this with NumPy, but looks similar to what you will do on the GPU
for element in x:
bin_number = np.int32((element - xmin)/bin_width)
if bin_number >= 0 and bin_number < histogram_out.shape[0]:
# only increment if in range
histogram_out[bin_number] += 1
x = np.random.normal(size=10000, loc=0, scale=1).astype(np.float32)
xmin = np.float32(-4.0)
xmax = np.float32(4.0)
histogram_out = np.zeros(shape=10, dtype=np.int32)
cpu_histogram(x, xmin, xmax, histogram_out)
histogram_out
@cuda.jit
def cuda_histogram(x, xmin, xmax, histogram_out):
'''Increment bin counts in histogram_out, given histogram range [xmin, xmax).'''
# add code here
start = cuda.grid(1)
stride = cuda.gridsize(1)
nbins = histogram_out.shape[0]
bin_width = (xmax - xmin) / nbins
for i in range(start, x.shape[0], stride):
bin_number = np.int32((x[i] - xmin)/bin_width)
if bin_number >= 0 and bin_number < histogram_out.shape[0]:
cuda.atomic.add(histogram_out, bin_number, 1)
threads_per_block = 128
blocks_per_grid = 30
histogram_out = np.zeros(shape=10, dtype=np.int32)
cuda_histogram[blocks_per_grid, threads_per_block](x, xmin, xmax, histogram_out)
histogram_out
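# (added check, as the exercise text suggests) the GPU histogram should match the CPU one
histogram_check = np.zeros(shape=10, dtype=np.int32)
cpu_histogram(x, xmin, xmax, histogram_check)
print(np.array_equal(histogram_check, histogram_out))   # expected: True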
def julia(x, y, max_iters):
    """
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Julia
    set given a fixed number of iterations.
    """
i = 0
c = complex(-0.8, 0.156)
a = complex(x,y)
for i in range(max_iters):
a = a*a + c
if (a.real*a.real + a.imag*a.imag) > 1000:
return 0
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = julia(real, imag, iters)
image[y, x] = color
return image
image = np.zeros((500, 750), dtype=np.uint8)
create_fractal(-2.0, 2.0, -1.0, 1.0, image, 200)
plt.imshow(image)
plt.viridis()
plt.show()
@cuda.jit(device=True)
def julia(x, y, max_iters):
    """
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Julia
    set given a fixed number of iterations.
    """
i = 0
c = complex(-0.8, 0.156)
a = complex(x,y)
for i in range(max_iters):
a = a*a + c
if (a.real*a.real + a.imag*a.imag) > 1000:
return 0
return 255
threadsperblock = 16
xblocks = (image.shape[1] + (threadsperblock - 1)) // threadsperblock
yblocks = (image.shape[0] + (threadsperblock - 1)) // threadsperblock
@cuda.jit
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
x, y = cuda.grid(2)
if x < width and y < height:
real = min_x + x * pixel_size_x
imag = min_y + y * pixel_size_y
color = julia(real, imag, iters)
image[y, x] = color
image = np.zeros((500, 750), dtype=np.uint8)
threadsperblock = 16
threads_per_block = (16, 16)
xblocks = (image.shape[1] + (threadsperblock - 1)) // threadsperblock
yblocks = (image.shape[0] + (threadsperblock - 1)) // threadsperblock
blocks_per_grid = (xblocks, yblocks)
%timeit create_fractal[blocks_per_grid, threads_per_block](-2.0, 2.0, -1.0, 1.0, image, 200)
plt.imshow(image)
plt.viridis()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 0 - Accessing the GPU
Step2: Paste the location of the libraries into the following code box if it differs from the paths below; otherwise you can just run the code as is
Step3: And that should do it! Okay, now that we've pointed Numba to the correct libraries, let's get going. To start, we are going to return to the first function we created yesterday - the vector add.
Step4: Cool, it worked! But what actually just happened? Well, a lot of things. Numba automatically compiled a CUDA kernel for the ufunc, allocated GPU memory for the inputs and the output, copied the input arrays to the device, launched the kernel over all the elements, and copied the result back to the host as a NumPy array.
Step5: 1b) Wow, the GPU is a LOT slower! Why might that be?
Step6: Now, let's give it a bit more work to do
Step7: As we saw in the last problem, copying the data to and from the GPU for every function is not necessarily the most efficient way to use the GPU. To address this, Numba provides the to_device function in the cuda module to allocate and copy arrays to the GPU
Step8: x_device and y_device are now Numba "device arrays" that are in many ways equivalent to Numpy ndarrays except that they live in the GPU's global memory, rather than on the CPU. These device arrays can be passed to Numba cuda functions just the way Numpy arrays can, but without the memory copying overhead.
Step9: You should see a big performance improvement already, but we are still allocating a device array for the output of the ufunc and copying it back to the host. We can create an output buffer on the GPU with the numba.cuda.device_array() function
Step10: And then we can use a special out keyword argument to the ufunc to specify the output buffer
Step11: You should see an even bigger improvement. Once we've finished all of our calculations on the GPU, we can copy the array back from the device using the copy_to_host method
Step12: 2c) Make a new version of the addition ufunc with 32-bit floats that targets the CPU. Compare the resulting execution time with the GPU version you just timed.
Step13: 2d) Now go back and try the two functions (gpu and cpu) with even larger arrays. When does the GPU start to win? Does the execution time on the GPU scale with the number of array elements the same way that the CPU version does?
Step14: Convert the following code to use device allocations so that there are only host<->device copies at the beginning and end. Then benchmark the performance change.
Step15: Problem 3 - Writing Cuda Kernels
Step16: That's a lot more typing than our ufunc example, and it is much more limited
Step17: The calling syntax is designed to mimic the way CUDA kernels are launched in C, where the number of blocks per grid and threads per block are specified in the square brackets, and the arguments to the function are specified afterwards in parentheses.
Step18: As before, using NumPy arrays forces Numba to allocate GPU memory, copy the arguments to the GPU, run the kernel, then copy the argument arrays back to the host. This is not very efficient, so you will often want to allocate device arrays.
Step19: Atomic Operations and avoiding Race Conditions
Step20: 3b) Let's practice writing a function that requires an atomic operation - a histogramming kernel. This will take an array of input data, a range and a number of bins, and count how many of the input data elements land in each bin. Below is an example CPU implementation of histogramming
Step21: In the space below, create a cuda version of this kernel, then run it to check that you get the same answer as the CPU version.
Step23: Problem 4 - Return to the Fractals!
Step25: In order to turn this into a GPU implementation, we'd like to have a kernel function (create_fractal) call another function (julia) on the device. Numba has a way of specifying functions that will be called from within a kernel by passing the cuda.jit decorator the argument device=True, as the code above does.
Step26: Multi-dimensional grids
Step27: Then, within a kernel, you can determine the absolute thread position by calling the grid helper function, as in x, y = cuda.grid(2).
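For reference (an added sketch, not from the original tutorial), cuda.grid(2) is simply shorthand for combining the per-block thread index with the block index in each dimension:
x = cuda.threadIdx.x + cuda.blockIdx.x * cuda.blockDim.x
y = cuda.threadIdx.y + cuda.blockIdx.y * cuda.blockDim.y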
|
13,339
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import sklearn
features = load_data()
from sklearn.preprocessing import MultiLabelBinarizer
new_features = MultiLabelBinarizer().fit_transform(features)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
13,340
|
<ASSISTANT_TASK:>
Python Code:
import threading
from _thread import start_new_thread, allocate_lock
import logging
import time
import numpy as np
def worker1():
print(threading.currentThread().getName(), '--begin')
time.sleep(3)
print(threading.currentThread().getName(), '--end')
def worker2():
print(threading.currentThread().getName(), '--BEGIN')
time.sleep(2)
print(threading.currentThread().getName(), '--END')
w1 = threading.Thread(name='firstWorker', target=worker1)
w2 = threading.Thread(name='secondWorker', target=worker2)
w3 = threading.Thread(name='thirdWorker', target=worker1)
w4 = threading.Thread(target=worker2)
w1.start()
w2.start()
w3.start()
w4.start()
# threads run asynchronously and look messy in the results.
# So, logging module can help tracking threads.
logging.basicConfig(level=logging.DEBUG,
format='[%(levelname)s] (%(threadName)-10s) %(message)s')
def worker1():
logging.debug('--begin') # <-- use logging instead of print function!
time.sleep(3)
logging.debug('--end')
def worker2():
logging.debug('--BEGIN')
time.sleep(2)
logging.debug('--END')
w1 = threading.Thread(name='firstWorker', target=worker1)
w2 = threading.Thread(name='secondWorker', target=worker2)
w3 = threading.Thread(name='thirdWorker', target=worker1)
w4 = threading.Thread(target=worker2)
w1.start()
w2.start()
w3.start()
w4.start()
num_threads = 0
def elephant(a):
global num_threads
num_threads += 1
time.sleep(0.1)
num_threads -= 1
print(num_threads)
start_new_thread(elephant,(99,))
start_new_thread(elephant,(999,))
start_new_thread(elephant,(1733,))
start_new_thread(elephant,(17334,))
if num_threads == 0:
print('num_threads is 0')
else:
print('num_threads is not 0')
# The output is strange and it also changes every time you run.
# That's because the global variable num_threads are accessed and changed simultaneously!
# Using locking method can help salvage this situation.
num_threads = 0
lock = allocate_lock() # make a lock object
def elephant(a):
global num_threads
lock.acquire() # start locking
num_threads += 1
time.sleep(0.1)
num_threads -= 1
lock.release() # releasing locking
print(num_threads)
start_new_thread(elephant,(99,))
start_new_thread(elephant,(999,))
start_new_thread(elephant,(1733,))
start_new_thread(elephant,(17334,))
if num_threads == 0:
print('num_threads is 0')
else:
print('num_threads is not 0')
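# (added variant, not in the original notebook) the higher-level threading API gives
# the same protection more idiomatically: a threading.Lock used as a context manager
# is released even if the worker raises an exception.
safe_lock = threading.Lock()
def elephant_with(a):
    global num_threads
    with safe_lock:
        num_threads += 1
        time.sleep(0.1)
        num_threads -= 1
    print(num_threads)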
# Daemon threads die when the main program dies.
# Non-daemon threads do not die until the work is done.
# NB. Default is non-daemon mode.
logging.basicConfig(level=logging.DEBUG,
format='(%(threadName)-10s) %(message)s')
def daemon():
logging.debug('Start')
time.sleep(3)
logging.debug('Done')
d = threading.Thread(name='Daemon', target=daemon)
d.setDaemon(True)
def non_daemon():
logging.debug('Start')
logging.debug('Done')
t = threading.Thread(name='Non-daemon', target=non_daemon)
d.start()
t.start()
# It's not supposed to show the last line "[DEBUG] (Daemon ) Done"
# But, Jupyter Notebook seemingly runs each cell as a program.
# If you run in the command line as
# $ python test.py
# it will throw logs:
# [DEBUG] (Daemon ) Start
# [DEBUG] (Non-daemon) Start
# [DEBUG] (Non-daemon) Done
#
# This program exits before the daemon thread finishes.
#
# Adding below lines will make the program waits until daemon threads finish.
#
# d.join()
# t.join()
logging.basicConfig(level=logging.DEBUG,
format='(%(threadName)-10s) %(message)s')
class SimpleThread(threading.Thread):
def run(self):
logging.debug('Running now...')
return
for i in range(4):
t = SimpleThread()
t.start()
# If you want to customize threads,
# make instance attributes so that they can be seen.
logging.basicConfig(level=logging.DEBUG,
format='(%(threadName)-10s) %(message)s')
class AwesomeThreads(threading.Thread):
def __init__(self, group=None, target=None, name=None,
args=(), kwargs=None, daemon=None):
# This redefining let you control threads and save values
threading.Thread.__init__(self, group=group, target=target,
name=name, daemon=daemon)
self.args = args
self.kwargs = kwargs
return
def run(self):
logging.debug('My args={}, My kwargs={}'.format(self.args, self.kwargs))
return
for i in range(6):
t = AwesomeThreads(args=(i,), kwargs={'하나':1, '둘':2})
t.start()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 1
Step2: Example 2
Step3: Example 3
Step4: Example 4
Step5: Example 5
Step6: Example 6
Step7: Example 7
|
13,341
|
<ASSISTANT_TASK:>
Python Code:
import deepchem as dc
from deepchem.models.tensorgraph.tensor_graph import TensorGraph
tg = TensorGraph(use_queue=False)
from deepchem.models.tensorgraph.layers import Feature
left_features = Feature(shape=(None, 75))
right_features = Feature(shape=(None, 75))
from deepchem.models.tensorgraph.layers import Dense
dense_left = Dense(out_channels=1, in_layers=[left_features])
dense_right = dense_left.shared(in_layers=[right_features])
from deepchem.models.tensorgraph.layers import Add
from deepchem.models.tensorgraph.layers import Label
from deepchem.models.tensorgraph.layers import L2Loss
from deepchem.models.tensorgraph.layers import ReduceMean
output = Add(in_layers=[dense_left, dense_right])
tg.add_output(output)
labels = Label(shape=(None, 1))
batch_loss = L2Loss(in_layers=[labels, output])
# Need to reduce over the loss
loss = ReduceMean(in_layers=batch_loss)
tg.set_loss(loss)
import numpy as np
import numpy.random
n_samples = 100
sampled_left_features = np.random.rand(n_samples, 75)
sampled_right_features = np.random.rand(n_samples, 75)
sampled_labels = np.random.rand(n_samples, 1)  # one regression target per sample
from deepchem.data.datasets import Databag
from deepchem.data.datasets import NumpyDataset
databag = Databag()
databag.add_dataset(left_features, NumpyDataset(sampled_left_features))
databag.add_dataset(right_features, NumpyDataset(sampled_right_features))
databag.add_dataset(labels, NumpyDataset(sampled_labels))
tg.fit_generator(
databag.iterbatches(epochs=100, batch_size=50, pad_batches=True))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We're going to construct an architecture that has two identical feature inputs. Let's call these feature inputs left_features and right_features.
Step2: Let's now apply a nonlinear transformation to both left_features and right_features. We can use the Dense layer to do so. In addition, let's make sure that we apply the same nonlinear transformation to both left_features and right_features. To this, we can use the Layer.shared(). We use this method by initializing a first Dense layer, and then calling the Layer.shared() method to make a copy of that layer.
Step3: Let's now combine these two transformed feature layers by addition. We will assume this network is being used to solve a regression problem, so we will introduce a Label that stores the true regression values. We can then define the objective function of the network via the L2Loss between the added output and the true label.
Step4: Let's now randomly sample an artificial dataset we can use to train this architecture. We will need to sample the left_features, right_features, and labels in order to be able to train the network.
Step5: How can we train TensorGraph networks with multiple Feature inputs? One option is to manually construct a python generator that provides inputs. The tutorial notebook on graph convolutions does this explicitly. For simpler cases, we can use the convenience object DataBag which makes it easier to construct generators. A DataBag holds multiple datasets (added via DataBag.add_dataset). The method DataBag.iterbatches() will construct a generator that peels off batches of the desired size from each dataset and return a dictionary mapping inputs (Feature, Label, and Weight objects) to data for that minibatch. Let's see DataBag in action.
Step6: Let's now train this architecture! We need to use the method TensorGraph.fit_generator() passing in a generator created by databag.iterbatches().
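Step5 above notes that a hand-written Python generator can be used instead of Databag. A minimal sketch of such a generator follows (not part of the original notebook; it assumes fit_generator accepts dicts that map each input layer to a numpy array for one minibatch):
def manual_generator(epochs, batch_size=50):
    for _ in range(epochs):
        for start in range(0, n_samples, batch_size):
            end = start + batch_size
            yield {
                left_features: sampled_left_features[start:end],
                right_features: sampled_right_features[start:end],
                labels: sampled_labels[start:end],
            }
# tg.fit_generator(manual_generator(epochs=100))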
|
13,342
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
def get_rail_id(row):
"""Extract specific rail_ids from complex data structure that assigns rail_ids
(Sample IDs) to snaptron_ids (exon-exon junctions). Designed to be used
as a pd.DataFrame().apply() function.
Arguments:
row - a row in the junction dataframe
Output: a list of sample IDs with the specific snaptron ID
"""
row = row['samples'].split(',')
all_sample_ids = []
for sample_id in row:
if sample_id != '':
sample_id = sample_id.split(':')[0]
all_sample_ids.append(sample_id)
return(all_sample_ids)
# Load Sample File
sample_file = 'samples.tsv.gz'
dictionary_df = (
pd.read_table(sample_file, low_memory=False)
.loc[:, ['rail_id', 'gdc_cases.samples.submitter_id']]
)
dictionary_df.head(2)
# Load junctions file
junction_file = 'tp53_junctions.txt.gz'
junction_df = pd.read_table(junction_file)
junction_df.head(2)
# Load mutation classification scores file
file = os.path.join('..', '..', 'classifiers', 'TP53',
'tables', 'mutation_classification_scores.tsv')
mut_scores_df = pd.read_table(file, index_col=0)
mut_scores_df.head(2)
# Load raw mutation file
file = os.path.join('..', '..', 'data', 'raw', 'mc3.v0.2.8.PUBLIC.maf.gz')
raw_mut_df = pd.read_table(file)
raw_mut_df.head()
# Load binary mutation file
file = os.path.join('..', '..', 'data', 'pancan_mutation_freeze.tsv')
mut_df = pd.read_table(file, index_col=0)
mut_df.head(2)
# Subset mutation file to samples with c375GT TP53 mutations
silent_mut_df = (
raw_mut_df.query('Hugo_Symbol == "TP53"')
.query('HGVSc == "c.375G>T"')
)
# Obtain the samples with the specific mutation
silent_mut_samples = silent_mut_df.Tumor_Sample_Barcode.str.slice(start=0, stop=15)
print(len(silent_mut_samples))
# From these, remove samples that also have a different mutation in TP53
only_silent_mut_samples = (
mut_df.reindex(silent_mut_samples)
.loc[:, 'TP53']
)
only_silent_mut_samples = (
only_silent_mut_samples.loc[only_silent_mut_samples == 0]
.index
.tolist()
)
print(len(only_silent_mut_samples))
# Select those samples in which we have mutation classification scores
mut_silent_scores_df = (
mut_scores_df
.loc[mut_scores_df.index.isin(only_silent_mut_samples), :]
)
print(mut_silent_scores_df.shape)
mut_silent_scores_df.head(2)
# Process and output junction file
out_file = 'tp53_junctions_with_mutations.csv.gz'
junctions_process_df = (
junction_df.assign(diff_start = abs(7675994 - junction_df['start']),
diff_end = abs(7675994 - junction_df['end']))
.sort_values(by = "diff_start")
)
junctions_process_df = (
junctions_process_df
.assign(rail_id = junctions_process_df.apply(get_rail_id, axis=1))
.set_index(['snaptron_id', 'start', 'end', 'length', 'strand',
'left_motif', 'right_motif', 'samples_count',
'coverage_sum', 'coverage_avg', 'coverage_median',
'diff_start', 'diff_end'])['rail_id']
.apply(pd.Series)
.stack()
.reset_index()
)
junctions_process_df[0] = junctions_process_df[0].astype(int)
junctions_process_df = (
junctions_process_df
.merge(dictionary_df, left_on=0, right_on='rail_id')
)
junctions_process_df = (
junctions_process_df
.assign(
tcga_id = junctions_process_df['gdc_cases.samples.submitter_id']
.str
.slice(start=0, stop=15)
)
.merge(mut_scores_df, left_on='tcga_id', right_index=True)
)
junctions_process_df.to_csv(out_file, compression='gzip')
junctions_process_df.head(2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Processing TP53 Exon-Exon Junction Data
Step2: First, load several files required for processing
Step3: Next, select the samples with the specific mutation of interest
Step4: Finally, process the junctions file and output the results for plotting
|
13,343
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
print('Esperamos trabalhar no diretório')
print(os.getcwd())
#Se usar o arquivo descompactado
#pd.read_csv('DOM2015.csv',sep=',')
base09 = pd.read_csv('DOM2009.csv',sep=',')
base13 = pd.read_csv('DOM2013.csv',sep=',')
#Se usar o arquivo compactado como está no dropbox púbico
#pd.read_csv('DOM2015csv.zip', compression='zip', header=0, sep=',', quotechar='"')
novabase09 = base09.loc[:,['UF', 'V0106','V0218', 'V2120', 'V4622']]
novabase13 = base13.loc[base13.V4621<500000,['UF', 'V0105','V0218', 'V2134', 'V4621']]
#Tabela de 2009
novabase09.V4622 = novabase09.V4622.astype('category')
novabase09.V4622.cat.categories = ('nenhuma', 'ate 1/4', 'de 1/4 ate 1/2', 'de 1/2 ate 1', 'de 1 ate 2', 'de 2 ate 3', 'de 3 ate 5', 'mais de 5','sem declaração')
novabase09.UF = novabase09.UF.astype('category')
novabase09.UF.cat.categories = ('rondonia', 'acre', 'amazonas', 'roraima', 'para', 'amapa', 'tocantins', 'maranhao', 'piaui', 'ceara', 'rio grande do norte', 'paraiba', 'pernambuco', 'alagoas','sergipe','bahia', 'minas gerais', 'espirito santo', 'rio de janeiro', 'sao paulo', 'parana','santa catarina','rio grande do sul', 'mato grosso do sul', 'mato grosso', 'goias', 'DF')
novabase09.V2120 = novabase09.V2120.astype('category')
novabase09.V2120.cat.categories = ('muito', 'medio', 'pouco')
novabase09.V0218 = novabase09.V0218.astype('category')
novabase09.V0218.cat.categories = ('coletado diretamente', 'coletado indiretamente', 'queimado', 'descarte em terreno baldio', 'descarte em rio ou mar', 'outro destino' )
#Tabela de 2013
V4621_cat = pd.cut(novabase13.V4621, bins=9)
faixa = (0,197,394, 788, 1576, 2364, 3152, 3940, 50000)
V4621_cat = pd.cut(novabase13.V4621, bins=faixa)
V4621_cat.value_counts(sort=False, normalize=True)*100
novabase13.UF = novabase13.UF.astype('category')
novabase13.UF.cat.categories = ('rondonia', 'acre', 'amazonas', 'roraima', 'para', 'amapa', 'tocantins', 'maranhao', 'piaui', 'ceara', 'rio grande do norte', 'paraiba', 'pernambuco', 'alagoas','sergipe','bahia', 'minas gerais', 'espirito santo', 'rio de janeiro', 'sao paulo', 'parana','santa catarina','rio grande do sul', 'mato grosso do sul', 'mato grosso', 'goias', 'DF')
novabase13.V2134 = novabase13.V2134.astype('category')
novabase13.V2134.cat.categories = ('perdeu peso', 'não perdeu peso')
# Tabelas 2009
#print('Quantidade de moradores por domicilios:')
#a = novabase09.V0106.value_counts(sort=True)
#ap = novabase09.V0106.value_counts(True) * 100
#print(a,'\n')
#print("Quantidade em porcentagem")
#print(ap,'\n')
#print('Salarios per capita (frequencia):')
#b = novabase09.V4622.value_counts(sort=True)
#bp = novabase09.V4622.value_counts(True) * 100
#print(b,'\n')
#print("Quantidade em porcentagem")
#print(bp,'\n')
#print('A quantidade de peso perdido:')
#c = novabase09.V2120.value_counts(sort=True)
#cp = novabase09.V2120.value_counts(True) * 100
#print(c,'\n')
#print("Quantidade em porcentagem")
#print(cp,'\n')
#print('Forma de descarte do lixo:')
#d = novabase09.V0218.value_counts(sort=True)
#dp = novabase09.V0218.value_counts(True) * 100
#print(d,'\n')
#print("Quantidade em porcentagem")
#print(dp,'\n')
#2009
#Cruzando variaveis (quantidade de moradores & perda de peso)
ct1 = pd.crosstab(novabase09.V0106, novabase09.V2120, margins=True, normalize = 'index')
#Cruzando variaveis (renda per capita & descarte de lixo)
ct2 = pd.crosstab(novabase09.V4622, novabase09.V0218, margins=True, normalize = 'index')
#Cruzando variaveis (renda per capita & quantidade de moradores)
ct3 = pd.crosstab(novabase09.V0106, novabase09.V4622, margins=True, normalize = 'index')
#Cruzando variaveis (estado & descarte de lixo )
ct4 = pd.crosstab(novabase09.UF, novabase09.V0218, margins=True, normalize = 'index')
#Cruzando variaveis (estado & perda de peso)
ct5 = pd.crosstab(novabase09.UF, novabase09.V2120, margins=True, normalize = 'index')
#2013
#Cruzando variaveis (quantidade de moradores & perda de peso)
d1 = pd.crosstab(novabase13.V0105, novabase13.V2134, margins=True, normalize = 'index')
#Cruzando variaveis (renda per capita & quantidade de moradores)
d3 = pd.crosstab(novabase13.V0105, V4621_cat, margins=True, normalize = 'index')
#2009
#Gráficos sobre variáveis cruzadas:
ct1.plot(kind='bar', color=('red', 'orange','yellow'), stacked = True)
plt.title('CT1')
plt.xlabel('Quantidade de moradores no domicílio')
plt.ylabel('Quantidade de pessoas que perderam peso (%)')
plt.tight_layout()
plt.show()
#ct2.plot(kind='bar', color=('green', 'yellow','red', 'brown', 'blue', 'black'), stacked = True)
#plt.title('CT2')
#plt.xlabel('Renda per capita')
#plt.ylabel('Descarte de lixo (%)')
#plt.tight_layout()
ct3.plot(kind='bar', color=('gray', 'red','orange', 'yellow', 'green', 'blue', 'purple', 'pink', 'black'), stacked = True)
plt.title('CT3')
plt.xlabel('Quantidade de moradores')
plt.ylabel('Renda per capita (%)')
plt.tight_layout()
plt.show()
d1.plot(kind='bar', color=('red','green'), stacked = True)
plt.title('D1')
plt.xlabel('Quantidade de moradores no domicílio')
plt.ylabel('Quantidade de pessoas que perderam peso (%)')
plt.tight_layout()
plt.show()
d3.plot(kind='bar', color=('red','orange', 'yellow', 'green', 'blue', 'purple', 'pink', 'black'), stacked = True)
plt.title('D3')
plt.xlabel('Quantidade de moradores')
plt.ylabel('Renda per capita (%)')
plt.tight_layout()
plt.show()
#c3.plot(kind='bar', color=('grey', 'red','orange', 'yellow', 'green', 'blue', 'purple', 'pink', 'black'), stacked = True)
#plt.title('C3')
#plt.xlabel('Quantidade de moradores')
#plt.ylabel('Renda per capita (%)')
#plt.tight_layout()
#plt.show()
#ct4.plot(kind='bar', color=('green', 'yellow','red', 'brown', 'blue', 'black'), stacked = True)
#plt.title('CT4')
#plt.xlabel('Estado')
#plt.ylabel('Descarte de lixo (%)')
#plt.tight_layout()
#plt.show()
#ct5.plot(kind='bar', color=('red', 'yellow', 'green'), stacked = True)
#plt.title('CT5')
#plt.xlabel('Estado')
#plt.ylabel('Quantidade de pessoas que perderam peso (%)')
#plt.tight_layout()
#plt.show()
ct1.plot(kind='hist', alpha=0.5)
plt.title('HISTOGRAMA CT1')
ct1.plot(kind='box')
plt.title('QUANTIS CT1')
ct3.plot(kind='hist', alpha=0.5)
plt.title('HISTOGRAMA CT3')
ct3.plot(kind='box')
plt.title('QUANTIS CT3')
d1.plot(kind='hist', alpha=0.5)
plt.title('HISTOGRAMA D1')
d1.plot(kind='box')
plt.title('QUANTIS D1')
d3.plot(kind='hist', alpha=0.5)
plt.title('HISTOGRAMA D3')
d3.plot(kind='box')
plt.title('QUANTIS D3')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the cell above we selected the variables that will be used. Next, we have
Step2: In the code above the variables were categorized as quantitative or qualitative.
Step3: The cell above shows the absolute count and the percentage of the number of residents per household. This part was crucial for building the crosstabs that follow, as well as for the plots and conclusions.
Step4: The code above defines the crosstabs (variable cross-tabulations). Only the crosstabs CT1, CT2, D1 and D3 were used.
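A toy sketch of what normalize='index' does in pd.crosstab (each row of the table is rescaled to sum to 1); the tiny DataFrame below is made up for illustration and is not part of the original analysis:
toy = pd.DataFrame({'group': ['a', 'a', 'a', 'b'], 'answer': ['yes', 'yes', 'no', 'yes']})
print(pd.crosstab(toy.group, toy.answer))                     # raw counts
print(pd.crosstab(toy.group, toy.answer, normalize='index'))  # row proportions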
|
13,344
|
<ASSISTANT_TASK:>
Python Code:
#the usual beginning
import pandas as pd
import numpy as np
from pandas import Series, DataFrame
from datetime import datetime, timedelta
from pandas import concat
#define any string with 'C' as NaN
def readD(val):
if 'C' in val:
return np.nan
return val
df = pd.read_csv('/Users/John/Dropbox/LLU/ROP/Pulse Ox/ROP007PO.csv',
parse_dates={'timestamp': ['Date','Time']},
index_col='timestamp',
usecols=['Date', 'Time', 'SpO2', 'PR', 'PI', 'Exceptions'],
na_values=['0'],
converters={'Exceptions': readD}
)
#parse_dates tells the read_csv function to combine the date and time column
#into one timestamp column and parse it as a timestamp.
# pandas is smart enough to know how to parse a date in various formats
#index_col sets the timestamp column to be the index.
#usecols tells the read_csv function to select only the subset of the columns.
#na_values is used to turn 0 into NaN
#converters: readD is the dict that means any string with 'C' with be NaN (for PI)
#dfclean = df[27:33][df[27:33].loc[:, ['SpO2', 'PR', 'PI', 'Exceptions']].apply(pd.notnull).all(1)]
#clean the dataframe to get rid of rows that have NaN for PI purposes
df_clean = df[df.loc[:, ['PI', 'Exceptions']].apply(pd.notnull).all(1)]
# NB: the pulse ox date/time runs 1 min and 32 seconds ahead of the phone; corrected for below.
TC = timedelta(minutes=1, seconds=32)
df_first = df.first_valid_index() #get the first number from index
Y = datetime(2015, 6, 30, 19, 2, 48)
#Y = pd.to_datetime(df_first) #convert index to datetime
# Y = TIME DATA COLLECTION BEGAN / First data point on CSV
# SYNTAX:
# datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]])
W = datetime(2015, 7, 1, 8, 30)+TC
# W = first eye drop starts
X = datetime(2015, 7, 1, 9, 49)+TC
# X = ROP Exam Started
Z = datetime(2015, 7, 1, 9, 53)+TC
# Z = ROP Exam Ended
df_last = df.last_valid_index() #get the last number from index
Q = datetime(2015, 7, 1, 13, 51, 19)
#Q = pd.to_datetime(df_last)
# Q = TIME DATA COLLECTION ENDED / Last Data point on CSV
df_clean[W:X]
avg0PI = df_clean.PI[Y:W].mean()
avg0O2 = df.SpO2[Y:W].mean()
avg0PR = df.PR[Y:W].mean()
print 'Baseline Averages\n', 'PI :\t',avg0PI, '\nSpO2 :\t',avg0O2,'\nPR :\t',avg0PR,
#df.std() for standard deviation
str(avg0PI)
# Every 5 min Average from start of eye drops to start of exam
def perdeltadrop(start, end, delta):
rdrop = []
curr = start
while curr < end:
rdrop.append(curr)
curr += delta
return rdrop
dfdropPI = df_clean.PI[W:W+timedelta(hours=1)]
dfdropO2 = df.SpO2[W:W+timedelta(hours=1)]
dfdropPR = df.PR[W:W+timedelta(hours=1)]
windrop = timedelta(minutes=5)#make the range
rdrop = perdeltadrop(W, W+timedelta(minutes=15), windrop)
avgdropPI = Series(index = rdrop, name = 'PI DurEyeD')
avgdropO2 = Series(index = rdrop, name = 'SpO2 DurEyeD')
avgdropPR = Series(index = rdrop, name = 'PR DurEyeD')
for i in rdrop:
avgdropPI[i] = dfdropPI[i:(i+windrop)].mean()
avgdropO2[i] = dfdropO2[i:(i+windrop)].mean()
avgdropPR[i] = dfdropPR[i:(i+windrop)].mean()
resultdrops = concat([avgdropPI, avgdropO2, avgdropPR], axis=1, join='inner')
print resultdrops
#AVERAGE DURING ROP EXAM FOR FIRST FOUR MINUTES
def perdelta1(start, end, delta):
r1 = []
curr = start
while curr < end:
r1.append(curr)
curr += delta
return r1
df1PI = df_clean.PI[X:X+timedelta(minutes=4)]
df1O2 = df.SpO2[X:X+timedelta(minutes=4)]
df1PR = df.PR[X:X+timedelta(minutes=4)]
win1 = timedelta(seconds=10) #any unit of time & make the range
r1 = perdelta1(X, X+timedelta(minutes=4), win1)
#make the series to store
avg1PI = Series(index = r1, name = 'PI DurEx')
avg1O2 = Series(index = r1, name = 'SpO2 DurEx')
avg1PR = Series(index = r1, name = 'PR DurEX')
#average!
for i1 in r1:
avg1PI[i1] = df1PI[i1:(i1+win1)].mean()
avg1O2[i1] = df1O2[i1:(i1+win1)].mean()
avg1PR[i1] = df1PR[i1:(i1+win1)].mean()
result1 = concat([avg1PI, avg1O2, avg1PR], axis=1, join='inner')
print result1
#AVERAGE EVERY 5 MINUTES ONE HOUR AFTER ROP EXAM
def perdelta2(start, end, delta):
r2 = []
curr = start
while curr < end:
r2.append(curr)
curr += delta
return r2
# datetime(year, month, day, hour, etc.)
df2PI = df_clean.PI[Z:(Z+timedelta(hours=1))]
df2O2 = df.SpO2[Z:(Z+timedelta(hours=1))]
df2PR = df.PR[Z:(Z+timedelta(hours=1))]
win2 = timedelta(minutes=5) #any unit of time, make the range
r2 = perdelta2(Z, (Z+timedelta(hours=1)), win2) #define the average using function
#make the series to store
avg2PI = Series(index = r2, name = 'PI q5MinHr1')
avg2O2 = Series(index = r2, name = 'O2 q5MinHr1')
avg2PR = Series(index = r2, name = 'PR q5MinHr1')
#average!
for i2 in r2:
avg2PI[i2] = df2PI[i2:(i2+win2)].mean()
avg2O2[i2] = df2O2[i2:(i2+win2)].mean()
avg2PR[i2] = df2PR[i2:(i2+win2)].mean()
result2 = concat([avg2PI, avg2O2, avg2PR], axis=1, join='inner')
print result2
#AVERAGE EVERY 15 MINUTES TWO HOURS AFTER ROP EXAM
def perdelta3(start, end, delta):
r3 = []
curr = start
while curr < end:
r3.append(curr)
curr += delta
return r3
# datetime(year, month, day, hour, etc.)
df3PI = df_clean.PI[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]
df3O2 = df.SpO2[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]
df3PR = df.PR[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]
win3 = timedelta(minutes=15) #any unit of time, make the range
r3 = perdelta3((Z+timedelta(hours=1)), (Z+timedelta(hours=2)), win3)
#make the series to store
avg3PI = Series(index = r3, name = 'PI q15MinHr2')
avg3O2 = Series(index = r3, name = 'O2 q15MinHr2')
avg3PR = Series(index = r3, name = 'PR q15MinHr2')
#average!
for i3 in r3:
avg3PI[i3] = df3PI[i3:(i3+win3)].mean()
avg3O2[i3] = df3O2[i3:(i3+win3)].mean()
avg3PR[i3] = df3PR[i3:(i3+win3)].mean()
result3 = concat([avg3PI, avg3O2, avg3PR], axis=1, join='inner')
print result3
#AVERAGE EVERY 30 MINUTES THREE HOURS AFTER ROP EXAM
def perdelta4(start, end, delta):
r4 = []
curr = start
while curr < end:
r4.append(curr)
curr += delta
return r4
# datetime(year, month, day, hour, etc.)
df4PI = df_clean.PI[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]
df4O2 = df.SpO2[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]
df4PR = df.PR[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]
win4 = timedelta(minutes=30) #any unit of time, make the range
r4 = perdelta4((Z+timedelta(hours=2)), (Z+timedelta(hours=3)), win4)
#make the series to store
avg4PI = Series(index = r4, name = 'PI q30MinHr3')
avg4O2 = Series(index = r4, name = 'O2 q30MinHr3')
avg4PR = Series(index = r4, name = 'PR q30MinHr3')
#average!
for i4 in r4:
avg4PI[i4] = df4PI[i4:(i4+win4)].mean()
avg4O2[i4] = df4O2[i4:(i4+win4)].mean()
avg4PR[i4] = df4PR[i4:(i4+win4)].mean()
result4 = concat([avg4PI, avg4O2, avg4PR], axis=1, join='inner')
print result4
#AVERAGE EVERY 60 MINUTES 4-24 HOURS AFTER ROP EXAM
def perdelta5(start, end, delta):
r5 = []
curr = start
while curr < end:
r5.append(curr)
curr += delta
return r5
# datetime(year, month, day, hour, etc.)
df5PI = df_clean.PI[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]
df5O2 = df.SpO2[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]
df5PR = df.PR[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]
win5 = timedelta(minutes=60) #any unit of time, make the range
r5 = perdelta5((Z+timedelta(hours=3)), (Z+timedelta(hours=24)), win5)
#make the series to store
avg5PI = Series(index = r5, name = 'PI q60MinHr4+')
avg5O2 = Series(index = r5, name = 'O2 q60MinHr4+')
avg5PR = Series(index = r5, name = 'PR q60MinHr4+')
#average!
for i5 in r5:
avg5PI[i5] = df5PI[i5:(i5+win5)].mean()
avg5O2[i5] = df5O2[i5:(i5+win5)].mean()
avg5PR[i5] = df5PR[i5:(i5+win5)].mean()
result5 = concat([avg5PI, avg5O2, avg5PR], axis=1, join='inner')
print result5
df_O2_pre = df[Y:W]
#Find count of these ranges
below = 0 # v <=80
middle = 0 #v >= 81 and v<=84
above = 0 #v >=85 and v<=89
ls = []
b_dict = {}
m_dict = {}
a_dict = {}
for i, v in df_O2_pre['SpO2'].iteritems():
if v <= 80: #below block
if not ls:
ls.append(v)
else:
if ls[0] >= 81: #if the range before was not below 80
if len(ls) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2
if ls[0] <= 84: #was it in the middle range?
m_dict[middle] = ls
middle += 1
ls = [v]
elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?
a_dict[above] = ls
above += 1
ls = [v]
else: #old list wasn't long enough to count
ls = [v]
else: #if in the same range
ls.append(v)
elif v >= 81 and v<= 84: #middle block
if not ls:
ls.append(v)
else:
if ls[0] <= 80 or (ls[0]>=85 and ls[0]<= 89): #if not in the middle range
if len(ls) >= 5: #if range was greater than 10 seconds
if ls[0] <= 80: #was it in the below range?
b_dict[below] = ls
below += 1
ls = [v]
elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?
a_dict[above] = ls
above += 1
ls = [v]
else: #old list wasn't long enough to count
ls = [v]
else:
ls.append(v)
elif v >= 85 and v <=89: #above block
if not ls:
ls.append(v)
else:
if ls[0] <=84 : #if not in the above range
if len(ls) >= 5: #if range was greater than
if ls[0] <= 80: #was it in the below range?
b_dict[below] = ls
below += 1
ls = [v]
elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range?
m_dict[middle] = ls
middle += 1
ls = [v]
else: #old list wasn't long enough to count
ls = [v]
else:
ls.append(v)
else: #v>90 or something else weird. start the list over
ls = []
#final list check
if len(ls) >= 5:
if ls[0] <= 80: #was it in the below range?
b_dict[below] = ls
below += 1
ls = [v]
elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range?
m_dict[middle] = ls
middle += 1
ls = [v]
elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?
a_dict[above] = ls
above += 1
b_len = 0.0
for key, val in b_dict.iteritems():
b_len += len(val)
m_len = 0.0
for key, val in m_dict.iteritems():
m_len += len(val)
a_len = 0.0
for key, val in a_dict.iteritems():
a_len += len(val)
df_O2_post = df[Z:Q]
#Find count of these ranges
below2 = 0 # v <=80
middle2= 0 #v >= 81 and v<=84
above2 = 0 #v >=85 and v<=89
ls2 = []
b_dict2 = {}
m_dict2 = {}
a_dict2 = {}
for i2, v2 in df_O2_post['SpO2'].iteritems():
if v2 <= 80: #below block
if not ls2:
ls2.append(v2)
else:
if ls2[0] >= 81: #if the range before was not below 80
if len(ls2) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2
if ls2[0] <= 84: #was it in the middle range?
m_dict2[middle2] = ls2
middle2 += 1
ls2 = [v2]
elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?
a_dict2[above2] = ls2
above2 += 1
ls2 = [v2]
else: #old list wasn't long enough to count
ls2 = [v2]
else: #if in the same range
ls2.append(v2)
elif v2 >= 81 and v2<= 84: #middle block
if not ls2:
ls2.append(v2)
else:
if ls2[0] <= 80 or (ls2[0]>=85 and ls2[0]<= 89): #if not in the middle range
if len(ls2) >= 5: #if range was greater than 10 seconds
if ls2[0] <= 80: #was it in the below range?
b_dict2[below2] = ls2
below2 += 1
ls2 = [v2]
elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?
a_dict2[above2] = ls2
above2 += 1
ls2 = [v2]
else: #old list wasn't long enough to count
ls2 = [v2]
else:
ls2.append(v2)
elif v2 >= 85 and v2 <=89: #above block
if not ls2:
ls2.append(v2)
else:
if ls2[0] <=84 : #if not in the above range
if len(ls2) >= 5: #if range was greater than
if ls2[0] <= 80: #was it in the below range?
b_dict2[below2] = ls2
below2 += 1
ls2 = [v2]
elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range?
m_dict2[middle2] = ls2
middle2 += 1
ls2 = [v2]
else: #old list wasn't long enough to count
ls2 = [v2]
else:
ls2.append(v2)
else: #v2>90 or something else weird. start the list over
ls2 = []
#final list check
if len(ls2) >= 5:
if ls2[0] <= 80: #was it in the below range?
b_dict2[below2] = ls2
below2 += 1
ls2= [v2]
elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range?
m_dict2[middle2] = ls2
middle2 += 1
ls2 = [v2]
elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?
a_dict2[above2] = ls2
above2 += 1
b_len2 = 0.0
for key, val2 in b_dict2.iteritems():
b_len2 += len(val2)
m_len2 = 0.0
for key, val2 in m_dict2.iteritems():
m_len2 += len(val2)
a_len2 = 0.0
for key, val2 in a_dict2.iteritems():
a_len2 += len(val2)
#print results from count and min
print "Desat Counts for X mins\n"
print "Pre Mild Desat (85-89) Count: %s\t" %above, "for %s min" %((a_len*2)/60.)
print "Pre Mod Desat (81-84) Count: %s\t" %middle, "for %s min" %((m_len*2)/60.)
print "Pre Sev Desat (=< 80) Count: %s\t" %below, "for %s min\n" %((b_len*2)/60.)
print "Post Mild Desat (85-89) Count: %s\t" %above2, "for %s min" %((a_len2*2)/60.)
print "Post Mod Desat (81-84) Count: %s\t" %middle2, "for %s min" %((m_len2*2)/60.)
print "Post Sev Desat (=< 80) Count: %s\t" %below2, "for %s min\n" %((b_len2*2)/60.)
print "Data Recording Time!"
print '*' * 10
print "Pre-Exam Data Recording Length\t", X - Y # start of exam - first data point
print "Post-Exam Data Recording Length\t", Q - Z #last data point - end of exam
print "Total Data Recording Length\t", Q - Y #last data point - first data point
Pre = ['Pre',(X-Y)]
Post = ['Post',(Q-Z)]
Total = ['Total',(Q-Y)]
RTL = [Pre, Post, Total]
PreMild = ['Pre Mild Desats \t',(above), 'for', (a_len*2)/60., 'mins']
PreMod = ['Pre Mod Desats \t',(middle), 'for', (m_len*2)/60., 'mins']
PreSev = ['Pre Sev Desats \t',(below), 'for', (b_len*2)/60., 'mins']
PreDesats = [PreMild, PreMod, PreSev]
PostMild = ['Post Mild Desats \t',(above2), 'for', (a_len2*2)/60., 'mins']
PostMod = ['Post Mod Desats \t',(middle2), 'for', (m_len2*2)/60., 'mins']
PostSev = ['Post Sev Desats \t',(below2), 'for', (b_len2*2)/60., 'mins']
PostDesats = [PostMild, PostMod, PostSev]
#creating a list for recording time length
#did it count check sort correctly? get rid of the ''' if you want to check your values
'''
print "Mild check"
for key, val in b_dict.iteritems():
print all(i <=80 for i in val)
print "Moderate check"
for key, val in m_dict.iteritems():
print all(i >= 81 and i<=84 for i in val)
print "Severe check"
for key, val in a_dict.iteritems():
print all(i >= 85 and i<=89 for i in val)
'''
print Q
print Z
leest = [(avg0PI), 'PI Start']
print leest[:1]
import csv
class excel_tab(csv.excel):
delimiter = '\t'
csv.register_dialect("excel_tab", excel_tab)
with open('test.csv', 'w') as f: #CHANGE CSV FILE NAME, saves in same directory
writer = csv.writer(f, dialect=excel_tab)
#writer.writerow(['PI, O2, PR']) accidently found this out but using commas = gives me columns YAY! fix this
#to make code look nice ok nice
writer.writerow([leest])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Import File into Python
Step3: Set Date and Time of ROP Exam and Eye Drops
Step4: Baseline Averages
Step5: Average q 5 Min for 1 hour after 1st Eye Drops
Step6: Average Every 10 Sec During ROP Exam for first 4 minutes
Step7: Average Every 5 Mins Hour 1-2 After ROP Exam
Step8: Average Every 15 Mins Hour 2-3 After ROP Exam
Step9: Average Every 30 Mins Hour 3-4 After ROP Exam
Step10: Average Every Hour 4-24 Hours Post ROP Exam
Step11: Mild, Moderate, and Severe Desaturation Events
Step12: Export to CSV
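The desaturation counting in Step11 is written as a long manual state machine. A more compact pandas sketch of the same run-length idea is given below; it is an alternative illustration, not code from the original notebook, and it assumes one sample every 2 seconds (so min_len=5 corresponds to runs of at least 10 seconds):
def count_desat_runs(spo2, low, high, min_len=5):
    in_range = spo2.between(low, high)                 # samples inside the band
    run_id = (in_range != in_range.shift()).cumsum()   # label contiguous runs
    runs = spo2[in_range].groupby(run_id[in_range]).size()
    runs = runs[runs >= min_len]                       # keep runs of >= 10 s
    return len(runs), runs.sum() * 2 / 60.0            # count, total minutes
# Example: count_desat_runs(df['SpO2'][Y:W], 85, 89)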
|
13,345
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
np.random.seed(42)
tf.random.set_seed(42)
import zipfile
import io
import shutil
import os
import urllib.request
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00331/sentiment%20labelled%20sentences.zip'
# Download zip file from url.
zipdata = io.BytesIO()
zipdata.write(urllib.request.urlopen(url).read())
# Extract zip files.
zfile = zipfile.ZipFile(zipdata)
zfile.extractall()
zfile.close()
# Rename directory to "data".
shutil.rmtree('./data', ignore_errors=True)
shutil.move('sentiment labelled sentences', 'data')
os.listdir('data')
import pandas as pd
df = pd.DataFrame(columns=['review', 'label'])
for file in sorted(os.listdir('data')):
if file.endswith('_labelled.txt'):
df = df.append(pd.read_csv(os.path.join('data', file),
sep='\t',
names=['review', 'label']))
df.describe()
df.iloc[1019]['review']
import pandas as pd
df = pd.DataFrame(columns=['review', 'label'])
for file in sorted(os.listdir('data')):
if file.endswith('_labelled.txt'):
df = df.append(pd.read_csv(os.path.join('data', file),
sep='\t',
names=['review', 'label'],
quoting=3))
df.describe()
df[df['label'] == 0].sample(10)
df[df['label'] == 1].sample(10)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
df['review'], df['label'].astype('int'),
test_size=0.2, random_state=1000)
print(len(X_train), len(X_test), len(y_train), len(y_test))
from sklearn.feature_extraction.text import CountVectorizer
data = [
"Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo",
"Seattle buffalo Seattle buffalo buffalo buffalo Seattle buffalo",
]
vectorizer = CountVectorizer(data)
vectorizer.fit(data)
data_vec = vectorizer.transform(data)
print(data_vec)
data = ['Buffalo Buffalo wings']
data_vec = vectorizer.transform(data)
print(data_vec)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(X_train)
len(vectorizer.vocabulary_)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='liblinear')
X_train_vec = vectorizer.transform(X_train)
model.fit(X_train_vec, y_train)
print('Training accuracy: {}'.format(model.score(X_train_vec, y_train)))
X_test_vec = vectorizer.transform(X_test)
print('Testing accuracy: {}'.format(model.score(X_test_vec, y_test)))
!python -m spacy download en_core_web_md
import en_core_web_md
spacy_model = en_core_web_md.load()
X_train.iloc[0]
tokens = spacy_model(X_train.iloc[0])
for token in tokens:
print(token.text, token.pos_)
def add_pos_tags(reviews_raw):
reviews = []
for i, review in enumerate(reviews_raw):
tokens = spacy_model(review)
review_with_pos = []
for token in tokens:
review_with_pos.append(token.text+"_"+token.pos_)
reviews.append(' '.join(review_with_pos))
return reviews
print(add_pos_tags("the big dog"))
X_train_annotated = add_pos_tags(X_train)
X_test_annotated = add_pos_tags(X_test)
vectorizer = CountVectorizer()
vectorizer.fit(X_train_annotated)
X_train_vec = vectorizer.transform(X_train_annotated)
X_test_vec = vectorizer.transform(X_test_annotated)
print(X_train_annotated[0])
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='liblinear')
model.fit(X_train_vec, y_train)
print('Training accuracy: {}'.format(model.score(X_train_vec, y_train)))
print('Testing accuracy: {}'.format(model.score(X_test_vec, y_test)))
from tensorflow import keras
tokenizer = keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(X_train)
X_train_tokenized = tokenizer.texts_to_sequences(X_train)
X_test_tokenized = tokenizer.texts_to_sequences(X_test)
print(X_train.iloc[0])
print(X_train_tokenized[0])
import matplotlib.pyplot as plt
review_lengths = [len(review) for review in X_train]
plt.hist(review_lengths, density=True)
plt.show()
maxlen = 50
X_train_padded = keras.preprocessing.sequence.pad_sequences(
X_train_tokenized, padding='post', maxlen=maxlen)
X_test_padded = keras.preprocessing.sequence.pad_sequences(
X_test_tokenized, padding='post', maxlen=maxlen)
print(X_train_padded[0])
# Include an extra index for the "<PAD>" token.
vocab_size = len(tokenizer.word_index) + 1
embedding_dim = 300
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in tokenizer.word_index.items():
token = spacy_model(word)[0]
# Make sure spaCy has an embedding for this token.
if not token.is_oov:
embedding_matrix[i] = token.vector
print(embedding_matrix.shape)
model = keras.Sequential([
keras.layers.Embedding(
vocab_size,
embedding_dim,
weights=[embedding_matrix],
trainable=False,
mask_zero=True
),
keras.layers.LSTM(64),
keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
model.compile(
loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
history = model.fit(
X_train_padded,
y_train,
epochs=10,
batch_size=64
)
loss, acc = model.evaluate(X_test_padded, y_test)
print('Test accuracy: {}'.format(acc))
# Your code goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Natural Language Processing
Step2: Next we'll download and unzip the data.
Step3: There are three files that we'll use in our model
Step4: Interesting. We were expecting 3000 data points, but only got 2748. What's going on?
Step5: In order to get around this, we need to tell the parser to turn off quote detection using the quoting argument. The possible values are
Step6: That looks much better. We got lucky that none of the reviews had embedded tabs, or they would have been quoted and our simple fix would not have worked.
Step7: And then the good.
Step8: The sentiment seems to check out. This concludes the EDA that we'll do for this dataset. Let's move on to data preparation for the model.
Step9: The labels are simple 0 and 1 values, so we don't need to do any preprocessing there. The reviews themselves are variable length text strings. Each model will handle them slightly differently, so we'll save the model-specific preprocessing for when we encounter each model.
Step10: The resultant matrix is
Step11: Unknown words, such as 'wings' in this case, just don't appear in the matrix.
Step12: We can now transform our training data into a count vector and train a model. For a basic model, we'll use a logistic regression.
Step13: That is excellent training accuracy. Let's see how well it generalizes.
Step14: It seems like our model might have overfit a bit. With over 97% training accuracy and only 86% testing accuracy, we likely need to work on making our model generalize better.
Step15: After the model is downloaded, we can import it directly using a Python import statement. After the import we can load the model.
Step16: And now we can use spaCY to annotate our data. Let's look at one of our reviews
Step17: We can then call spaCY directly and get information such as the part of speech of each word in our review.
Step18: Many of the annotations are obvious, such as NOUN, but others are less so. The spaCY annotation documentation is a good place to look if you are unsure about an annotation.
Step19: Let's now apply this to our entire dataset.
Step20: Our training accuracy really went up, but our testing accuracy went down. We are overfitting even more now.
Step21: We need to pad our input so all vectors have the same length. A quick histogram of review lengths shows that almost all reviews have fewer than 100 words. Let's take a closer look at the distribution of lengths.
Step22: Almost all reviews have fewer than 50 words! Therefore, we will pad to a maximum review length of 50.
Step23: Pre-Trained Word Embeddings
Step24: Loading the embeddings may take a little while to run. When it's done we'll have an embedding_matrix where each word index corresponds to a 300-dimensional GloVe vector. We can load this into an Embedding layer to train a model or visualize the embeddings.
Step25: We will train this model for 10 epochs since it is slower to train per epoch and reaches high training accuracy after 10 epochs. We use a batch size of 64 based on hyperparameter tuning.
Step26: And finally, we can evaluate the accuracy of the model on our test data.
Step27: Note that the final testing set accuracy is not significantly higher than that of our Logistic Regression model. We are using a complex model on a small dataset, which is prone to overfitting. You can usually achieve more generalizable results with a larger dataset.
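A short sketch of scoring a brand-new review with the trained Keras model, reusing the tokenizer and maxlen defined above (the example sentence is made up):
new_review = ["The battery life is terrible and the screen cracked in a week."]
seq = tokenizer.texts_to_sequences(new_review)
padded = keras.preprocessing.sequence.pad_sequences(seq, padding='post', maxlen=maxlen)
print('P(positive):', model.predict(padded)[0][0])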
|
13,346
|
<ASSISTANT_TASK:>
Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = 'train.p'
validation_file= 'valid.p'
testing_file = 'test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_validation, y_validation = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
import random
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of a traffic sign image?
index = random.randint(0, len(X_train) - 1)
image = X_train[index].squeeze()
image_shape = np.shape(image)
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train) - 1)
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image)
print(y_train[index])
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic Info of the Dataset
Step2: Visualize Data
Step3: Preprocess Data
Step4: Setup TensorFlow
Step5: SOLUTION
Step6: Features and Labels
Step7: Training Pipeline
Step8: Model Evaluation
Step9: Train the Model
Step10: Evaluate the Model
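Step3 above is labelled "Preprocess Data", but the code only shuffles the training set. A minimal normalization sketch that is often added at that point is shown here as an assumption, not as part of the original code:
X_train = (X_train.astype(np.float32) - 128.0) / 128.0
X_validation = (X_validation.astype(np.float32) - 128.0) / 128.0
X_test = (X_test.astype(np.float32) - 128.0) / 128.0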
|
13,347
|
<ASSISTANT_TASK:>
Python Code:
from pulp import *
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sn
#a handful of sites
sites = ['org','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P']
print(len(sites)-1)
#make some positions (so we can plot this)
positions = dict( ( a, (np.random.rand()-.5, np.random.rand()-.5)) for a in sites)
positions['org']=(0,0)
for s in positions:
p = positions[s]
plt.plot(p[0],p[1],'o')
plt.text(p[0]+.01,p[1],s,horizontalalignment='left',verticalalignment='center')
plt.gca().axis('off');
#straight line distance for simplicity
d = lambda p1,p2: np.sqrt( (p1[0]-p2[0])**2+ (p1[1]-p2[1])**2)
#calculate all the pairs
distances=dict( ((s1,s2), d(positions[s1],positions[s2])) for s1 in positions for s2 in positions if s1!=s2)
K = 4 #the number of sales people
#create the problme
prob=LpProblem("vehicle",LpMinimize)
#indicator variable if site i is connected to site j in the tour
x = LpVariable.dicts('x',distances, 0,1,LpBinary)
#dummy vars to eliminate subtours
u = LpVariable.dicts('u', sites, 0, len(sites)-1, LpInteger)
#the objective
cost = lpSum([x[(i,j)]*distances[(i,j)] for (i,j) in distances])
prob+=cost
#constraints
for k in sites:
cap = 1 if k != 'org' else K
#inbound connection
prob+= lpSum([ x[(i,k)] for i in sites if (i,k) in x]) ==cap
#outbound connection
prob+=lpSum([ x[(k,i)] for i in sites if (k,i) in x]) ==cap
#subtour elimination
N=len(sites)/K
for i in sites:
for j in sites:
if i != j and (i != 'org' and j!= 'org') and (i,j) in x:
prob += u[i] - u[j] <= (N)*(1-x[(i,j)]) - 1
%time prob.solve()
#prob.solve(GLPK_CMD(options=['--simplex']))
print(LpStatus[prob.status])
non_zero_edges = [ e for e in x if value(x[e]) != 0 ]
def get_next_site(parent):
'''helper function to get the next edge'''
edges = [e for e in non_zero_edges if e[0]==parent]
for e in edges:
non_zero_edges.remove(e)
return edges
tours = get_next_site('org')
tours = [ [e] for e in tours ]
for t in tours:
while t[-1][1] !='org':
t.append(get_next_site(t[-1][1])[-1])
for t in tours:
print(' -> '.join([ a for a,b in t]+['org']))
#draw the tours
colors = [np.random.rand(3) for i in range(len(tours))]
for t,c in zip(tours,colors):
for a,b in t:
p1,p2 = positions[a], positions[b]
plt.plot([p1[0],p2[0]],[p1[1],p2[1]], color=c)
#draw the map again
for s in positions:
p = positions[s]
plt.plot(p[0],p[1],'o')
plt.text(p[0]+.01,p[1],s,horizontalalignment='left',verticalalignment='center')
plt.gca().axis('off');
print(value(prob.objective))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. First let's make some fake data
Step2: 2. The model
Step3: Solve it!
Step4: And the result
Step5: The optimal tours
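The subtour-elimination constraints in "2. The model" are the Miller-Tucker-Zemlin (MTZ) formulation: whenever x[(i, j)] = 1 for two non-origin sites, u[j] must be at least u[i] + 1. A small sanity-check sketch (not in the original) that the ordering variables really do increase along each optimal tour:
for t in tours:
    order = [value(u[a]) for a, b in t if a != 'org']
    print(order, order == sorted(order))   # expected: True for every tour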
|
13,348
|
<ASSISTANT_TASK:>
Python Code:
l1 = sorted(['b', 'c', 'a']) # a list
l2 = sorted(('b', 'c', 'a')) # a tuple
l3 = sorted('bca') # a string
print(l1, l1 == l2 == l3)
print(type(l1) == type(l2) == type(l3) == list)
l = [1, 2, 3]
s = sorted(l)
print('This should be False:', id(l) == id(s))
l = [2, 3, 1]
s = sorted(l)
print(l, s, l == s) # l is unsorted, s is a new list
l.sort()
print(l, s, l == s) # Now l is sorted
l1 = [1, 2, 3]
l2 = [1, 2, 3]
print(l1 == l2, id(l1) == id(l2)) # True, False
(1, 2, 3)[2] = 4
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: sorted() always returns a new object.
Step2: It is also possible to sort lists in place with the sort() method of lists. It accepts the key and reverse parameters.
Step3: 2. Test for equality
Step4: str, tuple and list also support <, <=, etc. set does too, but the semantics are different ($\subset$, $\subseteq$, ...)
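A small sketch of the key and reverse parameters mentioned in Step2, and of the subset semantics of < / <= on sets:
words = ['banana', 'fig', 'Apple']
print(sorted(words, key=str.lower))            # case-insensitive: ['Apple', 'banana', 'fig']
print(sorted(words, key=len, reverse=True))    # longest first: ['banana', 'Apple', 'fig']
a, b = {1, 2}, {1, 2, 3}
print(a < b, a <= a, b < a)                    # True True False (proper subset, subset, not a subset)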
|
13,349
|
<ASSISTANT_TASK:>
Python Code:
pokemon = data[data.Generation == 1]
Image(url="http://i.giphy.com/yidUzHnBk32Um9aMMw.gif")
pokemon
# Afficher les données de Pikachu :
pokemon[pokemon.Name == 'Pikachu']
# Creer une variable de attribut de Pikachu
Pikachu = pokemon[pokemon.Name == 'Pikachu']
Pikachu
Image(url="http://i.giphy.com/xuXzcHMkuwvf2.gif")
# le plus grand nombre d'HP dans le DataFrame pokemon
pokemon['HP'].max()
# Si l'on souhaite trouver le pokemon en question il suffit de faire comme pour la sélection de Pikachu :
pokemon[pokemon['HP'] == 250]
# On peux très bien le faire de façon dynamique :
pokemon[pokemon['HP'] == pokemon['HP'].max()]
# On affiche un graphique HP/Attack :
pokemon.plot(x='HP', y='Attack', kind='scatter')
# On indique quelle mesure on désire pour l'axe "x" et "y" ainsi que le type de graphique
sns.boxplot(data=pokemon)
sns.boxplot(data=pokemon[['HP', 'Attack', 'Defense', 'Sp. Atk', 'Sp. Def','Speed']])
pokemon.describe()
pokemon['Att_Def'] = pokemon['Attack'] + pokemon['Defense']
pokemon.head()
pokemon['my_def'] = pokemon['HP'] + pokemon['Defense']
pokemon.plot(x='my_def', y='Att_Def', kind='scatter')
#pokemon.head()
sns.boxplot(pokemon.Att_Def)
pokemon[pokemon.Att_Def >= 280]
pokemon.sort_values('Att_Def', ascending=0).head(8)
Image(url="http://i.giphy.com/gPjzE0ivYud5C.gif")
sns.lmplot(x='Sp. Atk', y='Sp. Def', data=pokemon)
pokemon[(pokemon['Sp. Atk'] >= 180)]
# De façon écrite :
pokemon['Type 1'].value_counts()
# par un graphique :
sns.countplot(pokemon['Type 1'])
# On prend l'ensemble des type 1 possible en valeurs
list_type_1 = pokemon['Type 1'].unique() # Comme une liste de toutes les valeurs possible de "Type 1"
# Un peu de custom dans nos graphiques :
plt.figure(figsize=(12,8)) # Tu vas créer un graphique de taille un peu plus grand
graph = sns.countplot(pokemon['Type 1']) # Notre graphique que l'on met en variable
graph.set_xticklabels(list_type_1, rotation=20) # On dit qu'il faut faire une rotation de nos valeur "Type 1"
# Vous pouvez copier / coller le code au dessous en changeant "Type 1" -> "Type 2"
# On prend l'ensemble des type 1 possible en valeurs
list_type_2 = pokemon['Type 2'].unique() # Comme une liste de toutes les valeurs possible de "Type 1"
# Un peu de custom dans nos graphiques :
plt.figure(figsize=(12,8)); # Tu vas créer un graphique de taille un peu plus grand
graph = sns.countplot(pokemon['Type 2']) # Notre graphique que l'on met en variable
graph.set_xticklabels(list_type_2, rotation=20) # O
generation = data.groupby('Generation').mean()
generation
# Help fonction drop
#generation.drop?
generation = generation.drop(['#', 'Total', 'Legendary'], axis=1) # axis = 1 sont les colonnes / 0 sont les lignes
generation
generation.plot(figsize=(12, 8)) # figsize permet de choisir la taille du graphique
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here is how to select data from an existing DataFrame
Step2: Now let's look at Pikachu's stats. To display the data we do not need to create a new variable; it all depends on the analysis you want to do afterwards
Step3: We can check which pokemon has the most HP
Step4: If we want to look at more than 2 measures, we use Seaborn's ".boxplot()"
Step5: The problem here is that Total is much larger than the other measures. Moreover, "#" is not very useful either
Step6: We find our "Chansey" right at the top on the "HP" variable
Step7: Now we will create new measures
Step8: A red message (warning) appears
Step9: There is one particular point above 280 ;)
Step10: Have fun making plots with Seaborn's ".lmplot(...)" function
Step11: Seaborn automatically fits a regression on these 2 measures, i.e. it tries to draw a line that minimizes the error with respect to the points
Step12: Points above the line can be considered higher and points below it lower.
Step13: The plot is a bit too small and we cannot distinguish the types
Step14: Same thing for type 2
Step15: Using "groupby" to group the data by values
Step16: We want to drop the measures "#", "Total" and "Legendary", which are not very useful here...
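A small follow-on sketch (not in the original notebook) showing that groupby can compute several aggregates per generation at once, not just the mean:
data.groupby('Generation')[['HP', 'Attack', 'Defense']].agg(['mean', 'median', 'std'])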
|
13,350
|
<ASSISTANT_TASK:>
Python Code:
import jax
import jax.numpy as jnp
global_list = []
def log2(x):
global_list.append(x)
ln_x = jnp.log(x)
ln_2 = jnp.log(2.0)
return ln_x / ln_2
print(jax.make_jaxpr(log2)(3.0))
def log2_with_print(x):
print("printed x:", x)
ln_x = jnp.log(x)
ln_2 = jnp.log(2.0)
return ln_x / ln_2
print(jax.make_jaxpr(log2_with_print)(3.))
def log2_if_rank_2(x):
if x.ndim == 2:
ln_x = jnp.log(x)
ln_2 = jnp.log(2.0)
return ln_x / ln_2
else:
return x
print(jax.make_jaxpr(log2_if_rank_2)(jax.numpy.array([1, 2, 3])))
import jax
import jax.numpy as jnp
def selu(x, alpha=1.67, lambda_=1.05):
return lambda_ * jnp.where(x > 0, x, alpha * jnp.exp(x) - alpha)
x = jnp.arange(1000000)
%timeit selu(x).block_until_ready()
selu_jit = jax.jit(selu)
# Warm up
selu_jit(x).block_until_ready()
%timeit selu_jit(x).block_until_ready()
# Condition on value of x.
def f(x):
if x > 0:
return x
else:
return 2 * x
f_jit = jax.jit(f)
f_jit(10) # Should raise an error.
# While loop conditioned on x and n.
def g(x, n):
i = 0
while i < n:
i += 1
return x + i
g_jit = jax.jit(g)
g_jit(10, 20) # Should raise an error.
# While loop conditioned on x and n with a jitted body.
@jax.jit
def loop_body(prev_i):
return prev_i + 1
def g_inner_jitted(x, n):
i = 0
while i < n:
i = loop_body(i)
return x + i
g_inner_jitted(10, 20)
f_jit_correct = jax.jit(f, static_argnums=0)
print(f_jit_correct(10))
g_jit_correct = jax.jit(g, static_argnames=['n'])
print(g_jit_correct(10, 20))
from functools import partial
@partial(jax.jit, static_argnames=['n'])
def g_jit_decorated(x, n):
i = 0
while i < n:
i += 1
return x + i
print(g_jit_decorated(10, 20))
print("g jitted:")
%timeit g_jit_correct(10, 20).block_until_ready()
print("g:")
%timeit g(10, 20)
from functools import partial
def unjitted_loop_body(prev_i):
return prev_i + 1
def g_inner_jitted_partial(x, n):
i = 0
while i < n:
# Don't do this! each time the partial returns
# a function with different hash
i = jax.jit(partial(unjitted_loop_body))(i)
return x + i
def g_inner_jitted_lambda(x, n):
i = 0
while i < n:
# Don't do this!, lambda will also return
# a function with a different hash
i = jax.jit(lambda x: unjitted_loop_body(x))(i)
return x + i
def g_inner_jitted_normal(x, n):
i = 0
while i < n:
# this is OK, since JAX can find the
# cached, compiled function
i = jax.jit(unjitted_loop_body)(i)
return x + i
print("jit called in a loop with partials:")
%timeit g_inner_jitted_partial(10, 20).block_until_ready()
print("jit called in a loop with lambdas:")
%timeit g_inner_jitted_lambda(10, 20).block_until_ready()
print("jit called in a loop with caching:")
%timeit g_inner_jitted_normal(10, 20).block_until_ready()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Understanding Jaxprs section of the documentation provides more information on the meaning of the above output.
Step2: See how the printed x is a Traced object? That's the JAX internals at work.
Step3: JIT compiling a function
Step4: The code above is sending one operation at a time to the accelerator. This limits the ability of the XLA compiler to optimize our functions.
Step5: Here's what just happened
Step6: The problem is that we tried to condition on the value of an input to the function being jitted. The reason we can't do this is related to the fact mentioned above that jaxpr depends on the actual values used to trace it.
Step7: If we really need to JIT a function that has a condition on the value of an input, we can tell JAX to help itself to a less abstract tracer for a particular input by specifying static_argnums or static_argnames. The cost of this is that the resulting jaxpr is less flexible, so JAX will have to re-compile the function for every new value of the specified static input. It is only a good strategy if the function is guaranteed to get limited different values.
Step8: To specify such arguments when using jit as a decorator, a common pattern is to use python's functools.partial
Step9: When to use JIT
Step10: This is because jax.jit introduces some overhead itself. Therefore, it usually only saves time if the compiled function is complex and you will run it numerous times. Fortunately, this is common in machine learning, where we tend to compile a large, complicated model, then run it for millions of iterations.
|
13,351
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
df = pd.read_fwf(url, header=None, names=['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
'acceleration', 'model year', 'origin', 'car name'])
df.head()
df = df.set_index('car name')
df.head()
df = df[df['horsepower'] != '?']
df.dtypes
df['horsepower'] = pd.to_numeric(df['horsepower'])
# Check the data types again
df.dtypes
df = pd.get_dummies(df, columns=['origin'])
df.head()
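# Sketch (assumption, not part of the original analysis): for a linear model we
# would avoid perfect multicollinearity by dropping one of the dummy columns,
# either with pd.get_dummies(..., drop_first=True) or after the fact:
df_linear = df.drop(columns=[c for c in df.columns if c.startswith('origin_')][:1])
df_linear.head()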
df.describe()
df.corr()
pd.plotting.scatter_matrix(df, s=50, hist_kwds={'bins': 10}, figsize=(16, 16));
KNeighborsRegressor().get_params()
X = df.drop('mpg', axis='columns') # X = features
y = df['mpg'] # y = prediction target
pipeline = make_pipeline(
StandardScaler(),
KNeighborsRegressor()
)
# Build models for different values of n_neighbors (k), distance metric and weight scheme
parameters = {
'kneighborsregressor__n_neighbors': [2, 5, 10, 15, 20],
'kneighborsregressor__metric': ['manhattan', 'euclidean'],
'kneighborsregressor__weights': ['uniform', 'distance']
}
# Use inner CV to select the best model
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0) # K = 5
clf = GridSearchCV(pipeline, parameters, cv=inner_cv, n_jobs=-1) # n_jobs=-1 uses all available CPUs = faster
clf.fit(X, y)
# Use outer CV to evaluate the error of the best model
outer_cv = KFold(n_splits=10, shuffle=True, random_state=0) # K = 10, doesn't have to be the same
y_pred = cross_val_predict(clf, X, y, cv=outer_cv)
# Print the results
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the nearest neighbours regression model',
xlabel='Error'
);
clf.best_params_
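# Sketch (assumption: an extra inspection step, not in the original): the full grid
# search results can be examined to see how each candidate configuration performed.
results = pd.DataFrame(clf.cv_results_)
results[['params', 'mean_test_score', 'rank_test_score']].sort_values('rank_test_score').head()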
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's load the data. This week, we're going to load the Auto MPG data set, which is available online at the UC Irvine Machine Learning Repository. The dataset is in fixed width format, but fortunately this is supported out of the box by pandas' read_fwf function
Step2: Exploratory data analysis
Step3: As the car name is unique for each instance (according to the dataset documentation), it cannot be used to predict the MPG by itself so let's drop it as a feature and use it as the index instead
Step4: According to the documentation, the horsepower column contains a small number of missing values, each of which is denoted by the string '?'. Again, for simplicity, let's just drop these from the data set
Step5: Usually, pandas is smart enough to recognise that a column is numeric and will convert it to the appropriate data type automatically. However, in this case, because there were strings present initially, the value type of the horsepower column isn't numeric
Step6: We can correct this by converting the column values numbers manually, using pandas' to_numeric function
Step7: As can be seen, the data type of the horsepower column is now float64, i.e. a 64 bit floating point value.
Step8: As can be seen, one hot encoding converts the origin column into separate binary columns, each representing the presence or absence of the given category. Because we're going to use a nearest neighbours regression model, we don't need to worry about the effects of multicollinearity, and so there's no need to drop one of the encoded variable columns as we did in the case of linear regression.
Step9: Print a matrix of pairwise Pearson correlation values
Step10: Let's also create a scatter plot matrix
Step11: Based on the above information, we can conclude the following
Step12: You can find a more detailed description of each parameter in the scikit-learn documentation.
Step13: Our nearest neighbors regression model predicts the MPG with an average error of approximately ±1.99 with a standard deviation of 2.85, which is better than our final linear regression model from Lab 06 and comparable to our random forest regression model from Lab 07b. It's also worth noting that we were able to achieve this level of accuracy with very little feature engineering effort (albeit a little more than with decision tree regression). This is because the nearest neighbours algorithm does not rely on the same set of assumptions (e.g. linearity) as linear regression, and so is able to learn from data with less manual tuning.
|
13,352
|
<ASSISTANT_TASK:>
Python Code:
def get_closest_vowel(word):
if len(word) < 3:
return ""
vowels = {"a", "e", "i", "o", "u", "A", "E", 'O', 'U', 'I'}
for i in range(len(word)-2, 0, -1):
if word[i] in vowels:
if (word[i+1] not in vowels) and (word[i-1] not in vowels):
return word[i]
return ""
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
13,353
|
<ASSISTANT_TASK:>
Python Code:
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q
! pip3 install tensorflow-hub $USER_FLAG -q
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_URI
! gsutil ls -al $BUCKET_URI
import google.cloud.aiplatform as aip
import tensorflow as tf
import tensorflow_hub as hub
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", DEPLOY_COMPUTE)
! gcloud services enable artifactregistry.googleapis.com
PRIVATE_REPO = "my-docker-repo"
! gcloud artifacts repositories create {PRIVATE_REPO} --repository-format=docker --location={REGION} --description="Docker repository"
! gcloud artifacts repositories list
! gcloud auth configure-docker {REGION}-docker.pkg.dev --quiet
# Executes in Vertex AI Workbench
if DEPLOY_GPU:
DEPLOY_IMAGE = (
f"{REGION}-docker.pkg.dev/"
+ PROJECT_ID
+ f"/{PRIVATE_REPO}"
+ "/tf_serving:gpu"
)
TF_IMAGE = "tensorflow/serving:latest-gpu"
else:
DEPLOY_IMAGE = (
f"{REGION}-docker.pkg.dev/" + PROJECT_ID + f"/{PRIVATE_REPO}" + "/tf_serving"
)
TF_IMAGE = "tensorflow/serving:latest"
if not IS_COLAB:
if DEPLOY_GPU:
! sudo docker pull tensorflow/serving:latest-gpu
else:
! sudo docker pull tensorflow/serving:latest
! sudo docker tag tensorflow/serving $DEPLOY_IMAGE
! sudo docker push $DEPLOY_IMAGE
else:
# install docker daemon
! apt-get -qq install docker.io
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
%%bash -s $IS_COLAB $DEPLOY_IMAGE $TF_IMAGE
if [ $1 == "False" ]; then
exit 0
fi
set -x
dockerd -b none --iptables=0 -l warn &
for i in $(seq 5); do [ ! -S "/var/run/docker.sock" ] && sleep 2 || break; done
docker pull $3
docker tag tensorflow/serving $2
docker push $2
kill $(jobs -p)
tfhub_model = tf.keras.Sequential(
[hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/5")]
)
tfhub_model.build([None, 224, 224, 3])
tfhub_model.summary()
MODEL_DIR = BUCKET_URI + "/model/1"
tfhub_model.save(MODEL_DIR)
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(224, 224))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(tfhub_model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(tfhub_model, MODEL_DIR, signatures={"serving_default": serving_fn})
loaded = tf.saved_model.load(MODEL_DIR)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
MODEL_NAME = "example_" + TIMESTAMP
model = aip.Model.upload(
display_name="example_" + TIMESTAMP,
artifact_uri=MODEL_DIR[:-2],
serving_container_image_uri=DEPLOY_IMAGE,
serving_container_health_route="/v1/models/" + MODEL_NAME,
serving_container_predict_route="/v1/models/" + MODEL_NAME + ":predict",
serving_container_command=["/usr/bin/tensorflow_model_server"],
serving_container_args=[
"--model_name=" + MODEL_NAME,
"--model_base_path=" + "$(AIP_STORAGE_URI)",
"--rest_api_port=8080",
"--port=8500",
"--file_system_poll_wait_seconds=31540000",
],
serving_container_ports=[8080],
)
print(model)
endpoint = aip.Endpoint.create(
display_name="example_" + TIMESTAMP,
project=PROJECT_ID,
location=REGION,
labels={"your_key": "your_value"},
)
print(endpoint)
response = endpoint.deploy(
model=model,
deployed_model_display_name="example_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
)
print(endpoint)
! gsutil cp gs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg test.jpg
import base64
with open("test.jpg", "rb") as f:
data = f.read()
b64str = base64.b64encode(data).decode("utf-8")
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{serving_input: {"b64": b64str}}]
prediction = endpoint.predict(instances=instances)
print(prediction)
delete_bucket = False
delete_model = True
delete_endpoint = True
if delete_endpoint:
try:
endpoint.undeploy_all()
endpoint.delete()
except Exception as e:
print(e)
if delete_model:
try:
model.delete()
except Exception as e:
print(e)
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -rf {BUCKET_URI}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the kernel
Step2: Before you begin
Step3: Region
Step4: Timestamp
Step5: Authenticate your Google Cloud account
Step6: Create a Cloud Storage bucket
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Set up variables
Step10: Initialize Vertex AI SDK for Python
Step11: Set hardware accelerators
Step12: Set machine type
Step13: Enable Artifact Registry API
Step14: Create a private Docker repository
Step15: Configure authentication to your private repo
Step16: Container (Docker) image for serving
Step17: Executes in Colab
Step18: Get pretrained model from TensorFlow Hub
Step19: Save the model artifacts
Step20: Upload the model for serving
Step21: Get the serving function signature
Step22: Upload the TensorFlow Hub model to a Vertex AI Model resource
Step23: Creating an Endpoint resource
Step24: Deploying Model resources to an Endpoint resource.
Step25: Prepare test data for prediction
Step26: Make the prediction
Step27: Cleaning up
|
13,354
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-2', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
13,355
|
<ASSISTANT_TASK:>
Python Code:
# Imports for plotting
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
sns.set_style('darkgrid')
mb_solve_json =
{
"atom": {
"decays": [
{
"channels": [[0, 1]],
"rate": 1.0
}
],
"fields": [
{
"coupled_levels": [[0, 1]],
"rabi_freq": 1.0e-3,
"rabi_freq_t_args": {
"ampl": 1.0,
"centre": 0.0,
"fwhm": 1.0
},
"rabi_freq_t_func": "gaussian"
}
],
"num_states": 2
},
"t_min": -2.0,
"t_max": 10.0,
"t_steps": 1000,
"z_min": -0.2,
"z_max": 1.2,
"z_steps": 50,
"interaction_strengths": [
1.0
],
"velocity_classes": {
"thermal_delta_min": -0.1,
"thermal_delta_max": 0.1,
"thermal_delta_steps": 4,
"thermal_width": 0.05
},
"savefile": "velocity-classes"
}
from maxwellbloch import mb_solve
mbs = mb_solve.MBSolve().from_json_str(mb_solve_json)
mbs.thermal_delta_list/(2*np.pi)
mbs.thermal_weights
maxboltz = mb_solve.maxwell_boltzmann(mbs.thermal_delta_list,
2*np.pi*mbs.velocity_classes['thermal_width'])
plt.plot(mbs.thermal_delta_list, maxboltz, marker='o');
np.trapz(mbs.thermal_weights, mbs.thermal_delta_list)
Omegas_zt, states_zt = mbs.mbsolve(recalc=False)
fig = plt.figure(1, figsize=(16, 6))
ax = fig.add_subplot(111)
cmap_range = np.linspace(0.0, 1.0e-3, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_xlabel('Time ($1/\Gamma$)')
ax.set_ylabel('Distance ($L$)')
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.colorbar(cf);
from maxwellbloch import spectral, utility
interaction_strength = mbs.interaction_strengths[0]
decay_rate = mbs.atom.decays[0]['rate']
freq_list = spectral.freq_list(mbs)
absorption_linear_known = spectral.absorption_two_linear_known(freq_list,
interaction_strength, decay_rate)
dispersion_linear_known = spectral.dispersion_two_linear_known(freq_list,
interaction_strength, decay_rate)
fig = plt.figure(4, figsize=(16, 6))
ax = fig.add_subplot(111)
pal = sns.color_palette('deep')
ax.plot(freq_list, spectral.absorption(mbs, 0, -1),
label='Absorption', lw=5.0, c=pal[0])
ax.plot(freq_list, spectral.dispersion(mbs, 0, -1),
label='Dispersion', lw=5.0, c=pal[1])
ax.plot(freq_list, absorption_linear_known, ls='dotted', c=pal[0], lw=2.0, label='Absorption, No Thermal')
ax.plot(freq_list, dispersion_linear_known, ls='dotted', c=pal[1], lw=2.0, label='Dispersion, No Thermal')
# Widths
hm, r1, r2 = utility.half_max_roots(freq_list, spectral.absorption(mbs, field_idx=0))
plt.hlines(y=hm, xmin=r1, xmax=r2, linestyle='dotted', color=pal[0])
plt.annotate('FWHM: ' + '%0.2f'%(r2 - r1), xy=(r2, hm), color=pal[0],
xycoords='data', xytext=(5, 5), textcoords='offset points');
voigt = spectral.voigt_two_linear_known(freq_list, 1.0, 0.05).imag
ax.plot(freq_list, voigt, c='white',
ls='dashed', lw=2.0, label='Known Absorption, Voigt Profile')
ax.set_xlim(-3.0, 3.0)
ax.set_ylim(-1.0, 1.0)
ax.set_xlabel('Frequency ($\Gamma$)')
ax.legend();
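# Note (an assumption about the underlying physics, not stated in the notebook):
# in the weak-probe limit the thermally averaged absorption is the velocity-
# weighted average of the single-class Lorentzian response,
#   chi(Delta) ~ integral f(Delta') * chi_0(Delta - Delta') dDelta',
# i.e. a Gaussian-convolved Lorentzian, which is why the spectrum is compared
# against the known Voigt profile above.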
# Plot residuals
fig = plt.figure(figsize=(16, 2))
ax = fig.add_subplot(111)
ax.plot(freq_list, spectral.absorption(mbs, 0, -1) - voigt,
label='Absorption', lw=2.0, c=pal[0])
ax.set_xlim(-3.0, 3.0)
ax.set_ylim(-3e-2, 3e-2)
ax.set_xlabel('Frequency ($\Gamma$)');
vc = {
"thermal_delta_min": -0.1,
"thermal_delta_max": 0.1,
"thermal_delta_steps": 4,
"thermal_delta_inner_min": -0.05,
"thermal_delta_inner_max": 0.05,
"thermal_delta_inner_steps": 10,
"thermal_width": 0.05
}
mbs.build_velocity_classes(velocity_classes=vc)
print(mbs.thermal_delta_list/(2*np.pi))
maxboltz = mb_solve.maxwell_boltzmann(mbs.thermal_delta_list, 2*np.pi*mbs.velocity_classes['thermal_width'])
plt.plot(mbs.thermal_delta_list, maxboltz, marker='o');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Velocity Classes for Modelling Doppler Broadening in Thermal Systems
Step2: We can check the set of velocity classes we've defined
Step3: The weights of the Maxwell-Boltzmann distribution at these deltas is given by
Step4: And so we can plot the numerical approximation to the Gaussian Maxwell-Boltzmann distribution
Step5: It is useful to look at what the numerical integration looks like for these velocity classes. If it is close to 1, the thermal distribution should be well covered.
Step6: Now we can solve as before. Now at each $z$-step, the system will be solved thermal_delta_steps times, once for each velocity class, and so the time taken to solve scales linearly.
Step7: Results in the Time Domain
Step8: Results in the Frequency Domain
Step9: Adding Inner Steps
Step10: So the thermal delta range is $[-0.1, -0.05, 0.05, 0.1]$ and the inner range is $[-0.05, -0.04, \dots, 0.04, 0.05]$. These are combined to form
|
13,356
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-2', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
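# --- Editor's note (illustrative only) ---
# Free-text properties (those without a "Valid Choices" list) take an arbitrary
# descriptive string, e.g.:
#     DOC.set_value("<one-paragraph overview of the turbulence/convection scheme>")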
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
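# --- Editor's note (illustrative only) ---
# Boolean properties are set with an unquoted Python bool, e.g.:
#     DOC.set_value(True)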
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
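# --- Editor's note (illustrative only) ---
# Single-valued enumerations must match one of the listed choices exactly, e.g.:
#     DOC.set_value("maximum-random")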
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
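# --- Editor's note (illustrative only) ---
# Numeric properties take an unquoted number; a typical present-day total solar
# irradiance (units assumed to be W m-2) would be entered as, e.g.:
#     DOC.set_value(1361.0)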
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Fluorinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Fluorinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
13,357
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-2', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
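# --- Editor's note (illustrative only, not the model's actual configuration) ---
# As in the atmosphere notebook above, a multi-valued property can be recorded
# with one DOC.set_value() call per selected choice, e.g.:
#     DOC.set_value("water")
#     DOC.set_value("energy")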
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
13,358
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import sympy as sym
import quantecon as qe
import solowpy
import pypwt
pwt = pypwt.load_pwt_data()
fig, ax = plt.subplots(1, 1, figsize=(8,6))
for ctry in pwt.major_axis:
tmp_data = pwt.major_xs(ctry)
tmp_data.labsh.plot(color='gray', alpha=0.5)
# plot some specific countries
pwt.major_xs('USA').labsh.plot(color='blue', ax=ax, label='USA')
pwt.major_xs('IND').labsh.plot(color='green', ax=ax, label='IND')
pwt.major_xs('CHN').labsh.plot(color='orange', ax=ax, label='CHN')
# plot global average
avg_labor_share = pwt.labsh.mean(axis=0)
avg_labor_share.plot(color='r', ax=ax)
ax.set_title("Labor's share has been far from constant!",
fontsize=20, family='serif')
ax.set_xlabel('Year', family='serif', fontsize=15)
ax.set_ylabel('Labor share of income', family='serif', fontsize=15)
ax.set_ylim(0, 1)
plt.show()
solowpy.Model?
# define model variables
A, K, L = sym.symbols('A, K, L')
# define production parameters
alpha, sigma = sym.symbols('alpha, sigma')
# specify some production function
rho = (sigma - 1) / sigma
ces_output = (alpha * K**rho + (1 - alpha) * (A * L)**rho)**(1 / rho)
# define model parameters
ces_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,
'delta': 0.05, 'alpha': 0.33, 'sigma': 0.95}
# create an instance of the solow.Model class
ces_model = solowpy.CESModel(params=ces_params)
solowpy.Model.find_steady_state?
k_star, result = ces_model.find_steady_state(1e-6, 1e6, method='bisect', full_output=True)
print("The steady-state value is {}".format(k_star))
print("Did the bisection algorithm coverge? {}".format(result.converged))
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ces_model.plot_solow_diagram(ax)
fig.show()
# Note: in newer Jupyter/IPython environments these widgets live in ipywidgets
# (e.g. `from ipywidgets import fixed, interact, FloatSlider`).
from IPython.html.widgets import fixed, interact, FloatSliderWidget
# wrap the static plotting code in a function
def interactive_solow_diagram(model, **params):
"""Interactive widget for the Solow diagram."""
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
model.plot_solow_diagram(ax, Nk=1000, **params)
# define some widgets for the various parameters
eps = 1e-2
technology_progress_widget = FloatSliderWidget(min=-0.05, max=0.05, step=eps, value=0.02)
population_growth_widget = FloatSliderWidget(min=-0.05, max=0.05, step=eps, value=0.02)
savings_widget = FloatSliderWidget(min=eps, max=1-eps, step=eps, value=0.5)
output_elasticity_widget = FloatSliderWidget(min=eps, max=1.0, step=eps, value=0.5)
depreciation_widget = FloatSliderWidget(min=eps, max=1-eps, step=eps, value=0.5)
elasticity_substitution_widget = FloatSliderWidget(min=eps, max=10.0, step=0.01, value=1.0+eps)
# create the widget!
interact(interactive_solow_diagram,
model=fixed(ces_model),
g=technology_progress_widget,
n=population_growth_widget,
s=savings_widget,
alpha=output_elasticity_widget,
delta=depreciation_widget,
sigma=elasticity_substitution_widget,
)
ces_model.ivp?
ces_model.ivp.solve?
# need to specify some initial conditions
t0, k0 = 0.0, 0.5
numeric_soln = ces_model.ivp.solve(t0, k0, T=100, integrator='dopri5')
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# plot the finite-difference-approximation
ax.plot(numeric_soln[:,0], numeric_soln[:,1], 'bo', markersize=3.0)
# equilibrium value of capital stock (per unit effective labor)
k_star = ces_model.steady_state
ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=15, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Finite-difference approximation',
fontsize=20, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
plt.show()
ces_model.ivp.interpolate?
# interpolate!
ti = np.linspace(0, 100, 1000)
interpolated_soln = ces_model.ivp.interpolate(numeric_soln, ti, k=3)
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# plot the interpolated and finite difference approximations
ax.plot(ti, interpolated_soln[:,1], 'r-')
ax.plot(numeric_soln[:,0], numeric_soln[:,1], 'bo', markersize=3.0)
# equilibrium value of capital stock (per unit effective labor)
k_star = ces_model.steady_state
ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=15, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('B-spline approximation of the solution',
fontsize=20, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
plt.show()
# compute the residual...
ti = np.linspace(0, 100, 1000)
residual = ces_model.ivp.compute_residual(numeric_soln, ti, k=3)
# extract the raw residuals
capital_residual = residual[:, 1]
# typically, normalize residual by the level of the variable
norm_capital_residual = np.abs(capital_residual) / interpolated_soln[:,1]
# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(interpolated_soln[:,1], norm_capital_residual, 'b-', label='$k(t)$')
plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')
plt.xlabel('Capital (per unit effective labor), $k$', fontsize=15, family='serif')
plt.ylim(1e-16, 1)
plt.ylabel('Residuals (normalized)', fontsize=15, family='serif')
plt.yscale('log')
plt.title('Residual', fontsize=20, family='serif')
plt.legend(loc=0, frameon=False, bbox_to_anchor=(1.0,1.0))
plt.show()
# 50% increase in the current savings rate...
ces_model.irf.impulse = {'s': 1.5 * ces_model.params['s']}
# in efficiency units...
ces_model.irf.kind = 'efficiency_units'
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ces_model.irf.plot_impulse_response(ax, variable='output')
plt.show()
def interactive_impulse_response(model, shock, param, variable, kind, log_scale):
"""Interactive impulse response plotting tool."""
# specify the impulse response
model.irf.impulse = {param: shock * model.params[param]}
model.irf.kind = kind
# create the plot
fig, ax = plt.subplots(1, 1, figsize=(8,6))
model.irf.plot_impulse_response(ax, variable=variable, log=log_scale)
irf_widget = interact(interactive_impulse_response,
model=fixed(ces_model),
shock = FloatSliderWidget(min=0.1, max=5.0, step=0.1, value=0.5),
param = ['g', 'n', 's', 'alpha', 'delta' , 'sigma'],
variable=['capital', 'output', 'consumption', 'investment'],
kind=['efficiency_units', 'per_capita', 'levels'],
log_scale=False,
)
def awesome_interactive_plot(model, iso3_code, **params):
"""Interactive widget for the awesome plot."""
# extract the relevant data
tmp_data = pwt.major_xs(iso3_code)
actual_labor_share = tmp_data.labsh.values
actual_capital_share = 1 - tmp_data.labsh
output = tmp_data.rgdpna
capital = tmp_data.rkna
labor = tmp_data.emp
# need to update params
model.params.update(params)
# get new initial condition
implied_technology = model.evaluate_solow_residual(output, capital, labor)
k0 = tmp_data.rkna[0] / (implied_technology[0] * labor[0])
# finite difference approximation
T = actual_labor_share.size
soln = model.ivp.solve(t0, k0, T=T, integrator='dopri5')
# get predicted labor share
predicted_capital_share = model.evaluate_output_elasticity(soln[:,1])
predicted_labor_share = 1 - predicted_capital_share
# get predicted output per unit labor
predicted_intensive_output = model.evaluate_intensive_output(soln[:,1])
technology = implied_technology[0] * np.exp(ces_model.params['g'] * soln[:,0])
predicted_output_per_unit_labor = predicted_intensive_output * technology
# make the plots!
fig, axes = plt.subplots(1, 2, figsize=(12,6))
axes[0].plot(soln[:,0], predicted_labor_share, 'b')
axes[0].plot(soln[:,0], predicted_capital_share, 'g')
axes[0].plot(actual_labor_share)
axes[0].plot(actual_capital_share)
axes[0].set_xlabel('Time, $t$', fontsize=15, family='serif')
axes[0].set_ylim(0, 1)
axes[0].set_title('Labor share of income in {}'.format(iso3_code),
fontsize=20, family='serif')
axes[0].legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
axes[1].set_xlabel('Time, $t$', fontsize=15, family='serif')
axes[1].set_title('Growth rate of Y/L in {}'.format(iso3_code),
fontsize=20, family='serif')
axes[1].legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
axes[1].plot(soln[1:,0], np.diff(np.log(predicted_output_per_unit_labor)),
'b', markersize=3.0)
axes[1].plot(np.log(output / labor).diff().values)
# define some widgets for the various parameters
technology_progress_widget = FloatSliderWidget(min=-0.05, max=0.05, step=5e-3, value=0.01)
population_growth_widget = FloatSliderWidget(min=-0.05, max=0.05, step=5e-3, value=0.01)
savings_widget = FloatSliderWidget(min=eps, max=1-eps, step=5e-3, value=0.2)
output_elasticity_widget = FloatSliderWidget(min=eps, max=1.0, step=5e-3, value=0.15)
depreciation_widget = FloatSliderWidget(min=eps, max=1-eps, step=5e-3, value=0.02)
elasticity_substitution_widget = FloatSliderWidget(min=eps, max=10.0, step=1e-2, value=2.0+eps)
# create the widget!
interact(awesome_interactive_plot,
model=fixed(ces_model),
iso3_code='USA',
g=technology_progress_widget,
n=population_growth_widget,
s=savings_widget,
alpha=output_elasticity_widget,
delta=depreciation_widget,
sigma=elasticity_substitution_widget,
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0. Motivation
Step2: From the above figure it is clear that the prediction of constant factor shares is strongly at odds with the empirical data for most countries. Labor's share of real GDP has been declining, on average, for much of the post-war period. For many countries, such as India, China, and South Korea, the fall in labor's share has been dramatic. Note also that the observed trends in factor shares are inconsistent with an economy being on its long-run balanced growth path.
Step3: From the docstring we see that in order to create an instance of the model we need to specify two primitives
Step4: More details on how to create instances of the solow.Model class can be found in the Getting started notebook in the solowPy repository.
Step5: More details on the various methods of the solow.Model class for finding the model's steady state can be found in the accompanying Finding the steady state notebook in the solowPy repository.
Step7: Interactive example
Step8: There are number of additional plotting methods available (all of which can be turned into interactive plots using IPython widgets). See the Graphical analysis notebook in the solowPy repository.
Step9: ...meaning that we can solve this initial value problem by applying the solve method of the ivp attribute!
Step10: We can plot the finite-difference approximation of the solution as follows...
Step11: Finite-difference methods only return a discrete approximation to the continuous function $k(t)$. To get a continuous approximation of the solution we can combine finite-difference methods with B-spline interpolation using the interpolate method of the ivp attribute.
Step12: We can graphically compare the discrete, finite-difference approximation with the continuous, B-spline approximation as follows.
Step13: Accuracy of our numerical methods
Step14: We can then plot the residual as follows. Our approximation is accurate so long as the residual is everywhere "close" to zero.
Step15: For more details behind the numerical methods used in this section see the Solving the Solow model notebook in the solowPy repository.
Step17: Example
Step19: For more details and examples see the accompanying Impulse response function notebook in the solowPy repository.
|
13,359
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from keras.models import Model
from keras.layers import Dense, Activation, Embedding
from keras.layers import LSTM, Input
from keras.layers.merge import concatenate
from keras.optimizers import RMSprop, Adam
from keras.utils.data_utils import get_file
from keras.layers.normalization import BatchNormalization
from keras.callbacks import Callback, ModelCheckpoint
from sklearn.decomposition import PCA
from keras.utils import plot_model
import numpy as np
import random
import sys
import csv
import os
import h5py
import time
embeddings_path = "./glove.840B.300d-char.txt" # http://nlp.stanford.edu/data/glove.840B.300d.zip
embedding_dim = 300
batch_size = 32
use_pca = False
lr = 0.001
lr_decay = 1e-4
maxlen = 300
consume_less = 2 # 0 for cpu, 2 for gpu
text = open('./Alice.txt').read()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
# cut the text in semi-redundant sequences of maxlen characters
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
print('Vectorization...')
X = np.zeros((len(sentences), maxlen), dtype=np.int)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t] = char_indices[char]
y[i, char_indices[next_chars[i]]] = 1
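# Illustrative sanity check (not in the original script): X holds integer character
# indices for each sequence, y one-hot encodes the character that follows it.
print('X shape:', X.shape)  # (number of sequences, maxlen)
print('y shape:', y.shape)  # (number of sequences, number of unique characters)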
# test code to sample on 10% for functional model testing
def random_subset(X, y, p=0.1):
idx = np.random.randint(X.shape[0], size=int(X.shape[0] * p))
X = X[idx, :]
y = y[idx]
return (X, y)
# https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html
def generate_embedding_matrix(embeddings_path):
print('Processing pretrained character embeds...')
embedding_vectors = {}
with open(embeddings_path, 'r') as f:
for line in f:
line_split = line.strip().split(" ")
vec = np.array(line_split[1:], dtype=float)
char = line_split[0]
embedding_vectors[char] = vec
embedding_matrix = np.zeros((len(chars), 300))
#embedding_matrix = np.random.uniform(-1, 1, (len(chars), 300))
for char, i in char_indices.items():
#print ("{}, {}".format(char, i))
embedding_vector = embedding_vectors.get(char)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
# Use PCA from sklearn to reduce 300D -> 50D
if use_pca:
pca = PCA(n_components=embedding_dim)
pca.fit(embedding_matrix)
embedding_matrix_pca = np.array(pca.transform(embedding_matrix))
embedding_matrix_result = embedding_matrix_pca
print (embedding_matrix_pca)
print (embedding_matrix_pca.shape)
else:
embedding_matrix_result = embedding_matrix
return embedding_matrix_result
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds + 1e-6) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
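# Quick illustrative check (assumed example, not part of the original script):
# low temperatures make sampling greedier, high temperatures make it more uniform.
demo_preds = np.array([0.6, 0.3, 0.1])
print([sample(demo_preds, 0.2) for _ in range(5)])  # mostly index 0
print([sample(demo_preds, 1.5) for _ in range(5)])  # indices vary more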
print('Build model...')
main_input = Input(shape=(maxlen,))
embedding_matrix = generate_embedding_matrix(embeddings_path)
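# Illustrative sanity check (not in the original script): characters that never
# appear in the GloVe file keep an all-zero row in the pretrained embedding matrix.
n_missing = int(np.sum(~embedding_matrix.any(axis=1)))
print('{} of {} characters lack a pretrained embedding'.format(n_missing, len(chars)))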
embedding_layer = Embedding(
len(chars), embedding_dim, input_length=maxlen,
weights=[embedding_matrix])
# embedding_layer = Embedding(
# len(chars), embedding_dim, input_length=maxlen)
embedded = embedding_layer(main_input)
# RNN Layer
rnn = LSTM(256, implementation=consume_less)(embedded)
aux_output = Dense(len(chars))(rnn)
aux_output = Activation('softmax', name='aux_out')(aux_output)
# Hidden Layers
hidden_1 = Dense(512, use_bias=False)(rnn)
hidden_1 = BatchNormalization()(hidden_1)
hidden_1 = Activation('relu')(hidden_1)
hidden_2 = Dense(256, use_bias=False)(hidden_1)
hidden_2 = BatchNormalization()(hidden_2)
hidden_2 = Activation('relu')(hidden_2)
main_output = Dense(len(chars))(hidden_2)
main_output = Activation('softmax', name='main_out')(main_output)
model = Model(inputs=main_input, outputs=[main_output, aux_output])
optimizer = Adam(lr=lr, decay=lr_decay)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer, loss_weights=[1., 0.2])
model.summary()
#plot_model(model, to_file='model.png', show_shapes=True)
if not os.path.exists('./output'):
os.makedirs('./output')
f = open('./log.csv', 'w')
log_writer = csv.writer(f)
log_writer.writerow(['iteration', 'batch', 'batch_loss',
'epoch_loss', 'elapsed_time'])
checkpointer = ModelCheckpoint(
"./output/model.hdf5", monitor='main_out_loss', save_best_only=True)
class BatchLossLogger(Callback):
def on_epoch_begin(self, epoch, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('main_out_loss'))
if batch % 50 == 0:
log_writer.writerow([iteration, batch,
logs.get('main_out_loss'),
np.mean(self.losses),
round(time.time() - start_time, 2)])
ep = 1
start_time = time.time()
for iteration in range(1, 20):
print()
print('-' * 50)
print('Iteration', iteration)
logger = BatchLossLogger()
# X_train, y_train = random_subset(X, y)
# history = model.fit(X_train, [y_train, y_train], batch_size=batch_size,
# epochs=1, callbacks=[logger, checkpointer])
history = model.fit(X, [y, y], batch_size=batch_size,
epochs=ep, callbacks=[logger, checkpointer])
loss = str(history.history['main_out_loss'][-1]).replace(".", "_")
f2 = open('./output/iter-{:02}-{:.6}.txt'.format(iteration, loss), 'w')
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print()
print('----- diversity:', diversity)
f2.write('----- diversity:' + ' ' + str(diversity) + '\n')
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
f2.write('----- Generating with seed: "' + sentence + '"' + '\n---\n')
sys.stdout.write(generated)
for i in range(1200):
x = np.zeros((1, maxlen), dtype=np.int)
for t, char in enumerate(sentence):
x[0, t] = char_indices[char]
preds = model.predict(x, verbose=0)[0][0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
f2.write(generated + '\n')
print()
f2.close()
# Write embeddings for current characters to file
# The second layer has the embeddings.
embedding_weights = model.layers[1].get_weights()[0]
f3 = open('./output/char-embeddings.txt', 'w')
for char in char_indices:
if ord(char) < 128:
embed_vector = embedding_weights[char_indices[char], :]
f3.write(char + " " + " ".join(str(x)
for x in embed_vector) + "\n")
f3.close()
f.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting params for model setup and build.
Step2: Loading and reading Alice.txt corpus, saving characters (unique alphabet and punctuation characters in corpus) in array, and making dictionary associating each character with it's position in the character array (making two dictionaries where the key and position are either the key or value)
Step3: Cutting the document into semi-redundant sentences, where each element in the sentences list contain 40 sentences that overlap with the previous element's sentences (also doing a step size of 3 through each line in the text). Also, storing character in each next_chars array's elements, where the current element is the 40th character after the previous character.
Step4: Making X boolean (false) array with a shape of the length of the sentences by the step (40) by the length of the unique characters/punctuation in the document.
Step5: Defining helper functions.
Step6: Building text embedding matrix and RNN model. This is what differentiates this tutorial from tutorial 03.
Step7: Making batchloss class for more efficient epoch training and writing.
Step8: Model training. Use one epoch instead of ten.
|
13,360
|
<ASSISTANT_TASK:>
Python Code:
from rmtk.vulnerability.derivation_fragility.R_mu_T_dispersion.SPO2IDA import SPO2IDA_procedure
from rmtk.vulnerability.common import utils
%matplotlib inline
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Vb-dfloor.csv"
input_spectrum = "../../../../../../rmtk_data/FEMAP965spectrum.txt"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
Sa_ratios = utils.get_spectral_ratios(capacity_curves, input_spectrum)
utils.plot_capacity_curves(capacity_curves)
idealised_type = "quadrilinear"
idealised_capacity = utils.idealisation(idealised_type, capacity_curves)
utils.plot_idealised_capacity(idealised_capacity, capacity_curves, idealised_type)
damage_model_file = "../../../../../../rmtk_data/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
montecarlo_samples = 50
fragility_model = SPO2IDA_procedure.calculate_fragility(capacity_curves, idealised_capacity, damage_model, montecarlo_samples, Sa_ratios, 1)
minIML, maxIML = 0.01, 2
utils.plot_fragility_model(fragility_model, minIML, maxIML)
print(fragility_model)
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
utils.plot_vulnerability_model(vulnerability_model)
taxonomy = "RC"
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load capacity curves
Step2: Idealise pushover curves
Step3: Load damage state thresholds
Step4: Calculate fragility functions
Step5: Plot fragility functions
Step6: Save fragility functions
Step7: Obtain vulnerability function
Step8: Plot vulnerability function
Step9: Save vulnerability function
|
13,361
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# Simulate a dataset containing one feature (and one target)
# The feature values are contained in X
# The target values are contained in y
def make_data(N=100, err=0.8, rseed=1):
# randomly sample the data
rng = np.random.RandomState(rseed)
X = rng.rand(N, 1) ** 2
y = 10 - 1. / (X.ravel() + 0.1)
if err > 0:
y += err * rng.randn(N)
return X, y
# For plotting
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
# For building models and measuring their performance
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error as mse
from sklearn.model_selection import train_test_split as tt_split
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
# Create the complete dataset - total size = N
X, y = make_data(N=100, rseed=42)
# Split the dataset into training and test data
X_train, X_test, y_train, y_test = tt_split(X, y, test_size=0.2, random_state=42)
# Keep the test data the same throughout
# Get the first num_datapoints from the training dataset
def get_training_slice(num_datapoints):
xTrainSlice = X_train[0:num_datapoints]
yTrainSlice = y_train[0:num_datapoints]
return xTrainSlice, yTrainSlice
# Test it out
xTrainSlice, yTrainSlice = get_training_slice(3)
xTrainSlice, yTrainSlice
# Train a model with num_datapoints rows of the training dataset
def train_models(num_datapoints):
# Get the slice of the training data on which the model is trained
xTrainSlice, yTrainSlice = get_training_slice(num_datapoints)
# Train the model on the slice of training data
# The high bias model is a polynomial of degree 1
model_hb = PolynomialRegression(1).fit(xTrainSlice, yTrainSlice)
# The high variance model is a polynomial of degree 20
model_hv = PolynomialRegression(20).fit(xTrainSlice, yTrainSlice)
return xTrainSlice, yTrainSlice, model_hb, model_hv
# Test it out
xTrainSlice, yTrainSlice, model_hb, model_hv = train_models(10)
xTrainSlice, yTrainSlice, model_hb, model_hv;
# Quick Check on Performance
# MSE for the high-bias model
print("MSE for the high-bias model: {}".format(mse(y_test, model_hb.predict(X_test))))
# MSE for the high-variance model
print("MSE for the high-variance model: {}".format(mse(y_test, model_hv.predict(X_test))))
# For models trained on num_datapoints rows of the training dataset, show fit and performance
# on the rows of the training dataset and the entire test dataset
def plot_prediction_performance(num_datapoints):
xTrainSlice, yTrainSlice, model_hb, model_hv = train_models(num_datapoints)
# A set of points on the x axis for plotting purposes
xfit = np.linspace(-0.1, 1.0, 1000)[:, None]
fig, ax = plt.subplots(1, 2, figsize=(16, 8))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
# High-Bias Model: Scatter plot of the training dataset slice
ax[0].scatter(xTrainSlice.ravel(), yTrainSlice, s=40, c='blue')
# Plot of the predictions trained on the training dataset slice
ax[0].plot(xfit.ravel(), model_hb.predict(xfit), color='red')
ax[0].axis([-0.1, 1.0, -2, 14])
ax[0].set_title('High-Bias Model\n' + '# Training Datapoints = ' + str(num_datapoints) + "\n" + '# Test Datapoints = ' + str(len(y_test)), size=14)
ax[0].scatter(X_test.ravel(), y_test, s=40, c='green')
ax[0].text(0.02, 0.98, "training data $MSE$ = {0:.2f}".format(mse(yTrainSlice, model_hb.predict(xTrainSlice))),ha='left', va='top', transform=ax[0].transAxes, size=14, color='blue')
ax[0].text(0.02, 0.91, "test data $MSE$ = {0:.2f}".format(mse(y_test, model_hb.predict(X_test))),ha='left', va='top', transform=ax[0].transAxes, size=14, color='green')
# High-Variance Model: Scatter plot of the training dataset slice
ax[1].scatter(xTrainSlice.ravel(), yTrainSlice, s=40, c='blue')
# Plot of the predictions trained on the training dataset slice
ax[1].plot(xfit.ravel(), model_hv.predict(xfit), color='red')
ax[1].axis([-0.1, 1.0, -2, 14])
ax[1].set_title('High-Variance Model\n' + '# Training Datapoints = ' + str(num_datapoints) + "\n" + '# Test Datapoints = ' + str(len(y_test)), size=14)
ax[1].scatter(X_test.ravel(), y_test, s=40, c='green')
ax[1].text(0.02, 0.98, "training data $MSE$ = {0:.2f}".format(mse(yTrainSlice, model_hv.predict(xTrainSlice))),ha='left', va='top', transform=ax[1].transAxes, size=14, color='blue')
ax[1].text(0.02, 0.91, "test data $MSE$ = {0:.2f}".format(mse(y_test, model_hv.predict(X_test))),ha='left', va='top', transform=ax[1].transAxes, size=14, color='green')
plot_prediction_performance(2)
plot_prediction_performance(5)
plot_prediction_performance(10)
plot_prediction_performance(20)
plot_prediction_performance(50)
plot_prediction_performance(80)
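# Optional illustrative sketch (not part of the original walkthrough): scikit-learn's
# learning_curve automates the "train on increasingly large subsets" experiment above.
from sklearn.model_selection import learning_curve
train_sizes, train_scores, test_scores = learning_curve(
    PolynomialRegression(2), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 5),
    scoring='neg_mean_squared_error')
print(train_sizes)
print(-train_scores.mean(axis=1))  # average training MSE at each training-set size
print(-test_scores.mean(axis=1))   # average validation MSE at each training-set size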
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate a dataset and fit it with a high bias and a high variance model
Step2: Prediction Performance as Training Dataset Size Increases
|
13,362
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
clean_text = list(set(text))
vocab_to_int = {word: i for i, word in enumerate(clean_text) }
int_to_vocab = {i: word for word, i in vocab_to_int.items() }
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
return {
".": "||Period||",
",": "||Comma||",
"\"": "||Quotation_Mark||",
";": "||Semicolon||",
"!": "||Exclamation_Mark||",
"?": "||Question_Mark||",
"(": "||Left_Parentheses||",
")": "||Right_Parentheses||",
"--":"||Dash||",
"\n":"||Return||",
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
inputs = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name ="target")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
layers = 2
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.8)
cell = tf.contrib.rnn.MultiRNNCell([drop] * layers)
# zero_state expects the state dtype as its second argument
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, tf.identity(initial_state, name="initial_state")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
return outputs, tf.identity(final_state, name="final_state")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
chars_per_batch = batch_size * seq_length
nof_batches = len(int_text) // chars_per_batch
int_text = int_text + [int_text[0]]
batches = np.zeros((nof_batches, 2, batch_size, seq_length), dtype=np.int32)
for i in range(nof_batches):
for j in range(batch_size):
idx = i * seq_length + j * nof_batches * seq_length
batches[i][0][j] = int_text[idx:idx+seq_length]
if i == nof_batches-1 and j == batch_size - 1:
batches[i][1][j] = int_text[idx+1:idx+seq_length] + [int_text[0]]
else:
batches[i][1][j] = int_text[idx+1:idx+seq_length+1]
return np.array(batches)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
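# Illustrative toy check (assumed example, not part of the original notebook):
# with a 12-token text, batch_size=2 and seq_length=3 give two batches, each
# pairing input sequences with targets shifted by one token.
example_batches = get_batches(list(range(1, 13)), 2, 3)
print(example_batches.shape)  # (2, 2, 2, 3)
print(example_batches[0])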
# Number of Epochs
num_epochs = 300
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
get = loaded_graph.get_tensor_by_name
inputs = get("input:0")
initial_state = get("initial_state:0")
final_state = get("final_state:0")
probs = get("probs:0")
return inputs, initial_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
return int_to_vocab[np.random.choice(len(int_to_vocab), None, False, probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
13,363
|
<ASSISTANT_TASK:>
Python Code:
% matplotlib inline
import numpy as np
from scipy import signal
import numpy.polynomial.polynomial as poly
from netCDF4 import Dataset
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from eofs.standard import Eof
infile = 'data/hgt500.mon.mean.nc'
ncin = Dataset(infile, 'r')
hgt = ncin.variables['hgt'][:,0,:,:]
lat = ncin.variables['lat'][:]
lon = ncin.variables['lon'][:]
ncin.close()
nt,nlat,nlon = hgt.shape
hgt = hgt.reshape((nt,nlat*nlon), order='F')
hgt_detrend = signal.detrend(hgt, axis=0, type='linear', bp=0)
hgt_detrend = hgt_detrend.reshape((nt,nlat,nlon), order='F')
print(hgt_detrend.shape)
wgts = np.cos(np.deg2rad(lat))
wgts = wgts.reshape(len(wgts), 1)
solver = Eof(hgt_detrend, weights=wgts)
eof1 = solver.eofs(neofs=10)
pc1 = solver.pcs(npcs=10, pcscaling=0)
varfrac = solver.varianceFraction()
lambdas = solver.eigenvalues()
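# Illustrative sketch (assumes the eofs package's northTest method is available):
# typical errors on the eigenvalues help judge which EOFs are well separated
# according to the North et al. (1982) rule of thumb.
errors = solver.northTest(neigs=10, vfscaled=True)
print(errors)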
parallels = np.arange(-90,90,30.)
meridians = np.arange(-180,180,30)
for i in range(0,2):
fig = plt.figure(figsize=(12,9))
plt.subplot(211)
m = Basemap(projection='cyl', llcrnrlon=min(lon), llcrnrlat=min(lat), urcrnrlon=max(lon), urcrnrlat=max(lat))
x, y = m(*np.meshgrid(lon, lat))
clevs = np.linspace(np.min(eof1[i,:,:].squeeze()), np.max(eof1[i,:,:].squeeze()), 21)
cs = m.contourf(x, y, eof1[i,:,:].squeeze(), clevs, cmap=plt.cm.RdBu_r)
m.drawcoastlines()
m.drawparallels(parallels, labels=[1,0,0,0])
m.drawmeridians(meridians, labels=[1,0,0,1])
cb = m.colorbar(cs, 'right', size='5%', pad='2%')
cb.set_label('EOF', fontsize=12)
plt.title('EOF ' + str(i+1), fontsize=16)
plt.subplot(212)
days = np.linspace(1979,2003,nt)
plt.plot(days, pc1[:,i], linewidth=2)
plt.axhline(0, color='k')
plt.xlabel('Year')
plt.ylabel('PC Amplitude')
plt.ylim(np.min(pc1.squeeze()), np.max(pc1.squeeze()))
plt.figure(figsize=(11,6))
eof_num = range(1, 16)
plt.plot(eof_num, varfrac[0:15], linewidth=2)
plt.plot(eof_num, varfrac[0:15], linestyle='None', marker="o", color='r', markersize=8)
plt.axhline(0, color='k')
plt.xticks(range(1, 16))
plt.title('Fraction of the total variance represented by each EOF')
plt.xlabel('EOF #')
plt.ylabel('Variance Fraction')
plt.xlim(1, 15)
plt.ylim(np.min(varfrac), np.max(varfrac)+0.01)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Load hgt500 data
Step2: 3. Detrend
Step3: 4. Carry out EOF analysis
Step4: 4.3 Retrieve the leading EOFs
Step5: 5. Visualize leading EOFs
Step6: 5.2 Check variances explained by leading EOFs
|
13,364
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from ecell4.prelude import *
with reaction_rules():
A + B == C | (0.01, 0.3)
run_simulation(10.0, {'C': 60}, volume=1.0)
from ecell4_base.core import *
from ecell4_base import *
w = ode.World(Real3(1, 1, 1))
w = ode.World(Real3(1, 1, 1))
w.add_molecules(Species('C'), 60)
print(w.t(), w.num_molecules(Species('C'))) # must return (0.0, 60)
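# Minimal illustrative sketch: remove_molecules takes molecules back out of the world.
w.remove_molecules(Species('C'), 10)
print(w.num_molecules(Species('C')))  # expected to print 50
w.add_molecules(Species('C'), 10)  # restore the original count for the steps below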
pos = Real3(1, 2, 3)
print(pos) # must print like <ecell4.core.Real3 object at 0x7f44e118b9c0>
print(tuple(pos)) # must print (1.0, 2.0, 3.0)
pos1 = Real3(1, 1, 1)
x, y, z = pos[0], pos[1], pos[2]
pos2 = pos1 + pos1
pos3 = pos1 * 3
pos4 = pos1 / 5
print(length(pos1)) # must print 1.73205080757
print(dot_product(pos1, pos3)) # must print 9.0
import numpy
a = numpy.asarray(tuple(Real3(1, 2, 3)))
print(a) # must print [ 1. 2. 3.]
g = Integer3(1, 2, 3)
print(tuple(g))
print(tuple(Integer3(1, 2, 3) + Integer3(4, 5, 6))) # => (5, 7, 9)
print(tuple(Integer3(4, 5, 6) - Integer3(1, 2, 3))) # => (3, 3, 3)
print(tuple(Integer3(1, 2, 3) * 2)) # => (2, 4, 6)
print(dot_product(Integer3(1, 2, 3), Integer3(4, 5, 6))) # => 32
print(length(Integer3(1, 2, 3))) # => 3.74165738677
with reaction_rules():
A + B > C | 0.01 # equivalent to create_binding_reaction_rule
C > A + B | 0.3 # equivalent to create_unbinding_reaction_rule
m = get_model()
sim = ode.Simulator(w, m)
sim.run(10.0)
print(w.t(), w.num_molecules(Species('C'))) # must return (10.0, 30)
w = ode.World(Real3(1, 1, 1))
w.add_molecules(Species('C'), 60)
sim = ode.Simulator(w, m)
obs = FixedIntervalNumberObserver(0.1, ('A', 'C'))
sim.run(10.0, obs)
print(obs.data()) # must return [[0.0, 0.0, 60.0], ..., [10.0, 29.994446899691276, 30.005553100308752]]
show(obs)
from ecell4 import *
with reaction_rules():
A + B == C | (0.01, 0.3)
m = get_model()
# ode.World -> gillespie.World
w = gillespie.World(Real3(1, 1, 1))
w.add_molecules(Species('C'), 60)
# ode.Simulator -> gillespie.Simulator
sim = gillespie.Simulator(w, m)
obs = FixedIntervalNumberObserver(0.1, ('A', 'C'))
sim.run(10.0, obs)
show(obs, step=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we give you a breakdown for run_simulation.
Step2: Real3 is a coordinate vector.
Step3: Use add_molecules to add molecules, remove_molecules to remove molecules, num_molecules to know the number of molecules.
Step4: You cannot print the contents of a Real3 object directly.
Step5: You can use basic function like dot_product.
Step6: Integer3 represents a triplet of integers.
Step7: Of course, you can also apply simple arithmetics to Integer3.
Step8: 8.3. Creating and Running ODE Simulator
Step9: Then call the run method and the simulation will run.
Step10: You can see that the number of the Species C decreases from 60 to 30.
Step11: There are several types of Observers for E-Cell4.
Step12: This plots the time-series result easily.
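As a small addition to the steps above, here is a sketch that reuses only calls already shown in this notebook to record all three species with a FixedIntervalNumberObserver under the stochastic gillespie solver:
from ecell4 import *

with reaction_rules():
    A + B == C | (0.01, 0.3)
m = get_model()

w = gillespie.World(Real3(1, 1, 1))
w.add_molecules(Species('C'), 60)
sim = gillespie.Simulator(w, m)
obs = FixedIntervalNumberObserver(0.1, ('A', 'B', 'C'))
sim.run(10.0, obs)
print(obs.data()[-1])  # last row: [t, num_A, num_B, num_C]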
|
13,365
|
<ASSISTANT_TASK:>
Python Code:
def unique_digits(x):
odd_digit_elements = []
for i in x:
if all (int(c) % 2 == 1 for c in str(i)):
odd_digit_elements.append(i)
return sorted(odd_digit_elements)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
13,366
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df = pd.read_csv('./data/weight-height.csv')
df.head()
df.plot(kind = 'scatter',
figsize = (7, 7),
x = 'Height',
y = 'Weight',
title = 'Weight and Height in adults')
df.plot(kind = 'scatter',
figsize = (7, 7),
x = 'Height',
y = 'Weight',
title = 'Weight and Height in adults')
# Here we're plotting the red line 'by hand' with fixed values
# We'll try to learn this line with an algorithm below
plt.plot([55, 78], [75, 250], color='red', linewidth=3)
def line(x, w=0, b=0):
return x * w + b
x = np.linspace(55, 80, 100)
x
yhat = line(x, w = 0, b = 0)
yhat
df.plot(kind = 'scatter',
figsize = (7, 7),
x = 'Height',
y = 'Weight',
title = 'Weight and Height in adults')
plt.plot(x, yhat, color='red', linewidth=3)
def mean_squared_error(y_true, y_pred):
s = (y_true - y_pred) ** 2
return s.mean()
X = df[['Height']].values
y_true = df['Weight'].values
y_true
y_pred = line(X)
y_pred
mean_squared_error(y_true, y_pred.ravel())
plt.figure(figsize=(10, 5))
# we are going to draw 2 plots in the same figure
# first plot, data and a few lines
ax1 = plt.subplot(121)
df.plot(kind = 'scatter',
x = 'Height',
y = 'Weight',
title = 'Weight and Height in adults', ax=ax1)
# let's explore the cost function for a few values of b between -100 and +150
bbs = np.array([-100, -50, 0, 50, 100, 150])
mses = [] # we will append the values of the cost here, for each line
for b in bbs:
y_pred = line(X, w = 2, b = b)
mse = mean_squared_error(y_true, y_pred)
mses.append(mse)
plt.plot(X, y_pred)
# second plot: Cost function
ax2 = plt.subplot(122)
plt.plot(bbs, mses, 'o-')
plt.title('Cost as a function of b')
plt.xlabel('b')
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam, SGD
model = Sequential()
model.add(Dense(1, input_shape=(1,)))
model.summary()
model.compile(Adam(lr = 0.8), loss = 'mean_squared_error')
model.fit(X, y_true, epochs = 40)
y_pred = model.predict(X)
df.plot(kind = 'scatter',
x = 'Height',
y = 'Weight',
title = 'Weight and Height in adults')
plt.plot(X, y_pred, color='red')
W, B = model.get_weights()
W
B
from sklearn.metrics import r2_score
print("The R2 score is {:0.3f}".format(r2_score(y_true, y_pred)))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y_true,
test_size=0.2)
len(X_train)
len(X_test)
W[0, 0] = 0.0
B[0] = 0.0
model.set_weights((W, B))
model.fit(X_train, y_train, epochs = 50, verbose = 0)
y_train_pred = model.predict(X_train).ravel()
y_test_pred = model.predict(X_test).ravel()
from sklearn.metrics import mean_squared_error as mse
print("The Mean Squared Error on the Train set is:\t{:0.1f}".format(mse(y_train, y_train_pred)))
print("The Mean Squared Error on the Test set is:\t{:0.1f}".format(mse(y_test, y_test_pred)))
print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred)))
print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred)))
df = pd.read_csv('./data/user_visit_duration.csv')
df.head()
df.plot(kind = 'scatter',
x='Time (min)',
y='Buy')
model = Sequential()
model.add(Dense(1, input_shape=(1,), activation='sigmoid'))
model.compile(SGD(lr = 0.5),
loss = 'binary_crossentropy',
metrics=['accuracy'])
model.summary()
X = df[['Time (min)']].values
y = df['Buy'].values
model.fit(X, y, epochs = 25)
ax = df.plot(kind='scatter',
x = 'Time (min)',
y ='Buy',
title = 'Purchase behavior VS time spent on site')
temp = np.linspace(0, 4)
ax.plot(temp, model.predict(temp), color = 'orange')
plt.legend(['model', 'data'])
temp_class = model.predict(temp) > 0.5
ax = df.plot(kind = 'scatter',
x = 'Time (min)',
y = 'Buy',
title = 'Purchase behavior VS time spent on site')
temp = np.linspace(0, 4)
ax.plot(temp, temp_class, color = 'orange')
plt.legend(['model', 'data'])
y_pred = model.predict(X)
y_class_pred = y_pred > 0.5
from sklearn.metrics import accuracy_score
print("The accuracy score is {:0.3f}".format(accuracy_score(y, y_class_pred)))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
params = model.get_weights()
params = [np.zeros(w.shape) for w in params]
model.set_weights(params)
print("The accuracy score is {:0.3f}".format(accuracy_score(y, model.predict(X) > 0.5)))
model.fit(X_train, y_train, epochs = 25, verbose = 0)
print("The train accuracy score is {:0.3f}".format(accuracy_score(y_train, model.predict(X_train) > 0.5)))
print("The test accuracy score is {:0.3f}".format(accuracy_score(y_test, model.predict(X_test) > 0.5)))
from keras.wrappers.scikit_learn import KerasClassifier
def build_logistic_regression_model():
model = Sequential()
model.add(Dense(1,
input_shape = (1,),
activation = 'sigmoid'))
model.compile(SGD(lr = 0.5),
loss = 'binary_crossentropy',
metrics = ['accuracy'])
return model
model = KerasClassifier(build_fn = build_logistic_regression_model,
epochs = 25,
verbose = 0)
from sklearn.model_selection import cross_val_score, KFold
cv = KFold(3, shuffle = True)
scores = cross_val_score(model, X, y, cv = cv)
scores
print("The cross validation accuracy is {:0.4f} ± {:0.4f}".format(scores.mean(), scores.std()))
from sklearn.metrics import confusion_matrix
confusion_matrix(y, y_class_pred)
def pretty_confusion_matrix(y_true, y_pred, labels = ["False", "True"]):
cm = confusion_matrix(y_true, y_pred)
pred_labels = ['Predicted '+ l for l in labels]
df = pd.DataFrame(cm, index = labels, columns = pred_labels)
return df
pretty_confusion_matrix(y, y_class_pred, ['Not Buy', 'Buy'])
from sklearn.metrics import precision_score, recall_score, f1_score
print("Precision:\t{:0.3f}".format(precision_score(y, y_class_pred)))
print("Recall: \t{:0.3f}".format(recall_score(y, y_class_pred)))
print("F1 Score:\t{:0.3f}".format(f1_score(y, y_class_pred)))
from sklearn.metrics import classification_report
print(classification_report(y, y_class_pred))
df = pd.read_csv('./data/weight-height.csv')
df.head()
df['Gender'].unique()
pd.get_dummies(df['Gender'], prefix = 'Gender').head()
df['Height (feet)'] = df['Height']/12.0
df['Weight (100 lbs)'] = df['Weight']/100.0
df.describe().round(2)
from sklearn.preprocessing import MinMaxScaler
mms = MinMaxScaler()
df['Weight_mms'] = mms.fit_transform(df[['Weight']])
df['Height_mms'] = mms.fit_transform(df[['Height']])
df.describe().round(2)
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
df['Weight_ss'] = ss.fit_transform(df[['Weight']])
df['Height_ss'] = ss.fit_transform(df[['Height']])
df.describe().round(2)
plt.figure(figsize=(15, 5))
for i, feature in enumerate(['Height', 'Height (feet)', 'Height_mms', 'Height_ss']):
plt.subplot(1, 4, i+1)
df[feature].plot(kind = 'hist',
title = feature)
plt.xlabel(feature)
ex1 = pd.read_csv('./data/housing-data.csv')
ex1.head()
ex1.shape
plt.figure(figsize=(20, 5))
for i, feature in enumerate(ex1.columns):
plt.subplot(1, 4, i + 1)
ex1[feature].plot(kind = 'hist',
title = feature)
plt.xlabel(feature)
plt.tight_layout()
X = ex1[['sqft', 'bdrms', 'age']]
X.head()
Y = ex1[['price']]
Y.head()
X = X.values
Y = Y.values
X.shape
Y.shape
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD, Adam
model = Sequential()
model.add(Dense(1, input_shape = (3, )))
opt = Adam(lr = 0.8)
model.compile(optimizer = opt, loss = 'mean_squared_error')
model.summary()
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2)
model.fit(X_train, Y_train, epochs = 10)
from sklearn.metrics import r2_score
Y_train_predicted = model.predict(X_train)
Y_test_predicted = model.predict(X_test)
train_score = r2_score(Y_train, Y_train_predicted)
test_score = r2_score(Y_test, Y_test_predicted)
print('Train set score: \t{:0.3f}'.format(train_score))
print('Test set score: \t{:0.3f}'.format(test_score))
from sklearn.preprocessing import MinMaxScaler
minmax = MinMaxScaler()
X = minmax.fit_transform(X)
Y = minmax.fit_transform(Y)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2)
model = Sequential()
model.add(Dense(1, input_shape = (3, )))
opt = Adam(lr = 0.8)
model.compile(optimizer = opt, loss = 'mean_squared_error')
model.fit(X_train, Y_train, epochs = 20)
Y_train_predicted = model.predict(X_train)
Y_test_predicted = model.predict(X_test)
train_score = r2_score(Y_train, Y_train_predicted)
test_score = r2_score(Y_test, Y_test_predicted)
print('Train set score: \t{:0.3f}'.format(train_score))
print('Test set score: \t{:0.3f}'.format(test_score))
model = Sequential()
model.add(Dense(1, input_shape = (3, )))
opt = Adam(lr = 0.1)
model.compile(optimizer = opt, loss = 'mean_squared_error')
model.fit(X_train, Y_train, epochs = 20, verbose = 1)
Y_train_predicted = model.predict(X_train)
Y_test_predicted = model.predict(X_test)
train_score = r2_score(Y_train, Y_train_predicted)
test_score = r2_score(Y_test, Y_test_predicted)
print('Train set score: \t{:0.3f}'.format(train_score))
print('Test set score: \t{:0.3f}'.format(test_score))
model = Sequential()
model.add(Dense(1, input_shape = (3, )))
opt = SGD(lr = 0.1)
model.compile(optimizer = opt, loss = 'mean_squared_error')
model.fit(X_train, Y_train, epochs = 20)
Y_train_predicted = model.predict(X_train)
Y_test_predicted = model.predict(X_test)
train_score = r2_score(Y_train, Y_train_predicted)
test_score = r2_score(Y_test, Y_test_predicted)
print('Train set score: \t{:0.3f}'.format(train_score))
print('Test set score: \t{:0.3f}'.format(test_score))
ex2 = pd.read_csv('./data/HR_comma_sep.csv')
ex2.head()
ex2.info()
ex2.describe()
# Accuracy if predicted that all stay
# left = 1 : the employee already left
acc = (1 - (ex2['left'].value_counts()[1] / ex2.shape[0])) * 100
print('Predicting all would stay yields accuracy: \t{:0.2f} %'.format(acc))
len(ex2.columns)
ex2['average_montly_hours'].plot(kind = 'hist',
figsize = (10, 5))
ex2['time_spend_company'].plot(kind = 'hist',
figsize = (10, 5))
minmax = MinMaxScaler()
ex2['average_montly_hours'] = minmax.fit_transform(ex2['average_montly_hours'].values.reshape(-1,1))
ex2['time_spend_company'] = minmax.fit_transform(ex2['time_spend_company'].values.reshape(-1,1))
ex2['average_montly_hours'].plot(kind = 'hist',
figsize = (10, 5))
# Sales and salary are categorical data
ex2_dummies = pd.get_dummies(ex2[['sales', 'salary']])
ex2_dummies.head()
del ex2['sales'], ex2['salary']
ex2.head()
len(ex2.columns)
data = pd.concat([ex2, ex2_dummies], axis = 1)
data.head()
X = data.loc[:, data.columns != 'left']  # .loc replaces the removed pandas .ix indexer
X.head()
len(X.columns)
Y = data['left']
from sklearn.model_selection import train_test_split
X = X.values
Y = Y.values
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2)
model = Sequential()
model.add(Dense(1, input_dim = X_train.shape[1], activation = 'sigmoid'))
model.compile(optimizer = Adam(lr = 0.05), loss = 'binary_crossentropy', metrics = ['accuracy'])
model.summary()
model.fit(X_train, Y_train, epochs = 20)
Y_test_predicted = model.predict_classes(X_test)
from sklearn.metrics import confusion_matrix, classification_report
def pretty_confusion_matrix(y_true, y_pred, labels=["False", "True"]):
cm = confusion_matrix(y_true, y_pred)
pred_labels = ['Predicted '+ l for l in labels]
df = pd.DataFrame(cm, index = labels, columns = pred_labels)
return df
pretty_confusion_matrix(Y_test, Y_test_predicted, labels=['Stay', 'Leave'])
print(classification_report(Y_test, Y_test_predicted))
from sklearn.model_selection import cross_val_score, KFold
from keras.wrappers.scikit_learn import KerasClassifier
def build_model():
model = Sequential()
model.add(Dense(1,
input_dim = 20,
activation='sigmoid'))
model.compile(Adam(lr = 0.1), 'binary_crossentropy', metrics=['accuracy'])
return model
model = KerasClassifier(build_fn = build_model,
epochs = 10,
verbose = 0)
from sklearn.model_selection import KFold, cross_val_score
cross_val = KFold(5, shuffle = True)
scores = cross_val_score(model, X, Y, cv = cross_val)
print("The cross validation accuracy is {:0.4f} ± {:0.4f}".format(scores.mean(), scores.std()))
scores
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cost Function
Step2: Manual exploration of different values of W and b
Step3: Linear Regression with Keras
Step4: Evaluating Model Performance
Step5: Train Test Split
Step6: Classification
Step7: Train/Test split
Step8: Cross Validation
Step9: Confusion Matrix
Step10: Feature Preprocessing
Step11: Feature Transformations
Step12: MinMax normalization
Step13: 3) Standard normalization
Step14: Machine Learning Exercises
Step15: MinMaxScaler
Step16: Lowering the learning rate
Step17: Using SGD
Step18: Exercise 2
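To try the linear-regression-with-Keras pattern from these steps in isolation, here is a self-contained sketch on synthetic data (the slope, noise level and learning rate are illustrative assumptions, not values from the course dataset):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X = np.random.uniform(55, 78, size=(1000, 1))                  # synthetic "heights"
y = 7.7 * X[:, 0] - 350 + np.random.normal(0, 10, size=1000)   # noisy linear "weights"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = Sequential()
model.add(Dense(1, input_shape=(1,)))
model.compile(Adam(lr=0.8), loss='mean_squared_error')
model.fit(X_train, y_train, epochs=40, verbose=0)

print("Test R2: {:0.3f}".format(r2_score(y_test, model.predict(X_test).ravel())))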
|
13,367
|
<ASSISTANT_TASK:>
Python Code:
import os
import re
import pandas as pd
import pybel as pb  # pybel from Open Babel, provides the pb.readfile used below

os.chdir('../results')
molecule_string = []
casrn = []
test_type = []
dose = []
dose_amount = []
dose_units = []
route = []
organism = []
source = []
rootdir = '.'
fnames = []
for dirpath, subdirlist, filelist in os.walk(rootdir):
# Remove the _cas directory, as the files there do not appear to contain valid information
# .DS_Store appears only on systems running OSX
if '_cas' in subdirlist:
subdirlist.remove('_cas')
fnames.append( [os.path.join(dirpath, name) for name in filelist if (name != '.DS_Store')] )
for first_level in fnames:
if len(first_level):
print "Processing directory " + first_level[0].split('/')[1]
for f in first_level:
try:
current_mol = pb.readfile('sdf', f).next()
except StopIteration: # Some files are empty, in that case just skip and continue
print f, " --- file empty ---"
continue
molecule_string.append(str(current_mol).strip())
try:
tox_data = current_mol.data['chemid_tox_info'].split('\n')[1].split()
except KeyError:
print f, " --- no toxicology information ---"
continue
casrn.append(current_mol.data['casrn'])
test_type.append(tox_data[1])
dose_components = re.match(r"([0-9.]+)([a-z/]+)", tox_data[2], re.I)
if dose_components:
dose_amount.append(dose_components.groups()[0])
dose_units.append(dose_components.groups()[1])
else:
dose_amount.append('NA')
dose_units.append('NA')
route.append(tox_data[3])
organism.append(tox_data[4])
d = {'molecule_string':pd.Series(molecule_string),
'casrn':pd.Series(casrn),
'test_type':pd.Series(test_type),
'dose_amount':pd.Series(dose_amount),
'dose_units':pd.Series(dose_units),
'route':pd.Series(route),
'organism':pd.Series(organism)}
df = pd.DataFrame(d)
df.to_csv('../molecule_data.txt', sep='|', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following expression traverses the current directory tree and accumulates the file names in fnames
Step2: Read each file in the list as a pybel molecule object and extract the relevant information from it (this takes a while for the whole data bank, ~5min)
Step3: Convert the lists with the information to pandas objects and construct a data frame with them
Step4: Finally write the data frame to disk (change the parameters if necessary)
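As a follow-up to the last step, a quick sketch of reading the pipe-delimited file back in for analysis (the file name and column names are the ones written above):
import pandas as pd

df_back = pd.read_csv('../molecule_data.txt', sep='|')
print(df_back.shape)
print(df_back[['casrn', 'test_type', 'dose_amount', 'dose_units', 'route', 'organism']].head())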
|
13,368
|
<ASSISTANT_TASK:>
Python Code:
import pkg_resources
if pkg_resources.get_distribution('CGRtools').version.split('.')[:2] != ['4', '0']:
print('WARNING. Tutorial was tested on 4.0 version of CGRtools')
else:
print('Welcome!')
# load data for tutorial
from pickle import load
from traceback import format_exc
with open('molecules.dat', 'rb') as f:
molecules = load(f) # list of MoleculeContainer objects
with open('reactions.dat', 'rb') as f:
reactions = load(f) # list of ReactionContainer objects
m1, m2, m3, m4 = molecules # molecule
r2 = reactions[2] # reaction
m3 # molecule with kekulized ring
m3.standardize() # apply standardization. Returns True if any group found
m3 # group-standardized structure.
m3.thiele() # aromatizes and returns True when any ring is found
m3
m3.explicify_hydrogens() # return number of added hydrogens
m3.clean2d(randomize=True) # for added hydrogen atoms coordinates are not calculated. Thus, it looks like hydrogen has the same position on image
m3
m3.kekule()
m3.implicify_hydrogens()
m3
r2
r2.standardize()
r2.explicify_hydrogens()
r2.clean2d(randomize=True)
r2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3.1. Molecules
Step2: Molecules has explicify_hydrogens and implicify_hydrogens methods to handle hydrogens.
Step3: 3.2. Reactions standardization
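For a compact recap, a sketch that assumes the tutorial's molecules.dat pickle is available and applies the same standardization and aromatization calls (only methods already used above) to the first molecule:
from pickle import load

with open('molecules.dat', 'rb') as f:
    mols = load(f)
mol = mols[0]
mol.standardize()           # functional-group standardization
mol.kekule()                # explicit alternating single/double bonds
mol.thiele()                # back to the aromatic representation
mol.explicify_hydrogens()   # add explicit hydrogens
mol.implicify_hydrogens()   # fold them back into implicit counts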
|
13,369
|
<ASSISTANT_TASK:>
Python Code:
from msmbuilder.dataset import dataset
import mdtraj as md
import numpy as np
from glob import glob
from mdtraj.utils import timing
import itertools
from msmbuilder.featurizer import AtomPairsFeaturizer
from msmbuilder.decomposition import tICA
from msmbuilder.cluster import MiniBatchKMeans
from msmbuilder.cluster import MiniBatchKMedoids
%matplotlib inline
from matplotlib import pyplot as plt
traj_list = sorted(glob("/Users/je714/wt_data/*/05*nc"))
top = md.load_prmtop("/Users/je714/wt_data/run8/WT-ff14SB_clean.prmtop")
ef_hand = top.select("resid 64 to 74")
cat_cal = top.select("resid 419")
atom_sel = np.append(ef_hand, cat_cal)
atom_sel
xyz = dataset("/Users/je714/wt_data/*/05*nc", fmt='mdtraj',
topology="/Users/je714/wt_data/run8/WT-ff14SB_clean.prmtop",
atom_indices=atom_sel)
pairs = np.asarray(list(itertools.product(list(range(139)), [138])))
print(pairs.shape)
pairs_noIJ = pairs[pairs[:,0] != pairs[:,1]] # Drop out the elements that are i=j (whose distance is 0)
pairs_noIJ
traj1 = xyz[0]
dist_featurizer = AtomPairsFeaturizer(pair_indices=pairs)
with timing("Featurizing into distances..."):
dists = dist_featurizer.fit_transform(xyz)
tica_model = tICA(n_components=10, lag_time=1)
with timing("tica..."):
tica_trajs_dists = tica_model.fit_transform(dists)
tica_trajs_dists[0].shape
def tica_plotter(tica_trajs):
txx = np.concatenate(tica_trajs)
plt.figure(figsize=(14, 4))
plt.subplot(1, 2, 1)
plt.hexbin(txx[:, 0], txx[:, 1], bins='log', mincnt=1)
plt.xlabel('tIC 1')
plt.ylabel('tIC 2')
cb = plt.colorbar()
cb.set_label('log10(N)')
plt.subplot(1, 2, 2)
time = np.linspace(start=0.0, stop=txx.shape[0]*0.02, num = txx.shape[0]) / 1000
plt.plot(time, txx[:, 0])
plt.xlabel("Aggregated time ($\mu$s)")
plt.ylabel("tIC 1")
plt.tight_layout()
tica_plotter(tica_trajs_dists)
clusterer = MiniBatchKMeans(n_clusters=400)
with timing("Clustering tica trajs from dist featurization..."):
clustered_trajs = clusterer.fit_transform(tica_trajs_dists)
def cluster_plotter(trajectory, clusterer_object):
txx = np.concatenate(trajectory)
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1)
plt.scatter(clusterer_object.cluster_centers_[:,0],
clusterer_object.cluster_centers_[:,1],
s=100, c='w')
cluster_plotter(tica_trajs_dists, clusterer)
from msmbuilder.msm import MarkovStateModel
from msmbuilder.utils import dump
msm = MarkovStateModel(lag_time=50)
msm.fit(clustered_trajs)
print("The MSM has %s states.\n" % msm.n_states_)
print(msm.left_eigenvectors_.shape)
txx = np.concatenate(tica_trajs_dists)  # pool the tICA-projected trajectories for plotting
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap="Greys")
plt.scatter(clusterer.cluster_centers_[:,0],
clusterer.cluster_centers_[:,1],
s=1e4 * msm.populations_, # size by population
c=msm.left_eigenvectors_[:,1], # color by eigenvector
cmap="RdBu")
plt.colorbar(label='First dynamical eigenvector')
plt.xlabel('tIC 1')
plt.ylabel('tIC 2')
plt.tight_layout()
from msmbuilder.lumping import PCCAPlus
pcca = PCCAPlus.from_msm(msm, n_macrostates=5)
macro_trajs = pcca.transform(clustered_trajs)
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap="Greys")
plt.scatter(clusterer.cluster_centers_[:,0],
clusterer.cluster_centers_[:,1],
s=100,
c=pcca.microstate_mapping_,
)
plt.xlabel('tIC 1')
plt.ylabel('tIC 2')
from sklearn.pipeline import Pipeline
from sklearn.cross_validation import KFold
from sklearn.grid_search import RandomizedSearchCV
from scipy.stats.distributions import randint
model = Pipeline([
('featurizer', AtomPairsFeaturizer(pair_indices=pairs)),
('tica', tICA(n_components=4)),
('clusterer', MiniBatchKMeans()),
('msm', MarkovStateModel(n_timescales=3, ergodic_cutoff='on', reversible_type='mle', verbose=False))
])
search = RandomizedSearchCV(model, n_iter=50, cv=3, refit=False, param_distributions={
'tica__lag_time':randint(1,100),
'clusterer__n_clusters':randint(1,1000),
'msm__lag_time':randint(1,100)
})
def plot_scores(score_matrix):
from matplotlib import pyplot as pp
scores = np.array([[np.mean(e.cv_validation_scores),
np.std(e.cv_validation_scores),
e.parameters['tica__lag_time'],
e.parameters['clusterer__n_clusters'],
e.parameters['msm__lag_time']]
for e in score_matrix.grid_scores_])
mean_score_value = scores[:,0]
std_score_value = scores[:,1]
lags_tica = scores[:,2]
cluster_number = scores[:,3]
lags_msm = scores[:,4]
pp.figure(figsize=(14,4))
pp.grid(False)
interval = 2*std_score_value
# subplot1
pp.subplot(1,3,1, axisbg='white')
pp.errorbar(x=lags_tica, y=mean_score_value, yerr=interval, color = 'b', fmt='o')
pp.plot(pp.xlim(), [search.best_score_]*2, 'k-.', label = 'best')
pp.xlabel("tICA lag time (Frame)")
pp.ylabel("Score")
# subplot2
pp.subplot(1,3,2, axisbg='white')
pp.errorbar(x=cluster_number, y=mean_score_value, yerr=interval, color = 'r', fmt='o')
pp.plot(pp.xlim(), [search.best_score_]*2, 'k-.', label = 'best')
pp.xlabel("Number of clusters")
pp.ylabel("Score")
# subplot3
pp.subplot(1,3,3, axisbg='white')
pp.errorbar(x=lags_msm, y=mean_score_value, yerr=interval, color = 'g', fmt='o')
pp.plot(pp.xlim(), [search.best_score_]*2, 'k-.', label = 'best')
pp.xlabel("MSM lag time (Frame)")
pp.ylabel("Score")
pp.tight_layout()
# with timing("Cross validation..."):
# xyz_scores = search.fit(xyz)
# for key, value in xyz_scores.best_params_.items():
# print("%s\t%s" % (key, value))
# print("Score = %.2f" % xyz_scores.best_score_)
# plot_scores(xyz_scores)
model = Pipeline([
('tica', tICA()),
('clusterer', MiniBatchKMedoids()),
('msm', MarkovStateModel(ergodic_cutoff='on', reversible_type='mle', verbose=False))
])
search = RandomizedSearchCV(model, n_iter=100, cv=5, refit=False, param_distributions={
'tica__lag_time':randint(1,200),
'tica__n_components':randint(1,15),
'clusterer__n_clusters':randint(1,1000),
'msm__n_timescales':randint(1,15),
'msm__lag_time':randint(1,100)
})
print(search)
with timing("Cross validation on trajectories already fit to distance feature..."):
feat_scores = search.fit(dists)
feat_scores.best_params_
plot_scores(feat_scores)
opt_tica = tICA(lag_time=118, n_components=13)
opt_cluster = MiniBatchKMedoids(n_clusters=346)
opt_msm = MarkovStateModel(lag_time=20, n_timescales=14)
opt_tica_trajs = opt_tica.fit_transform(dists)
tica_plotter(opt_tica_trajs)
clustered_opt_tica_trajs = opt_cluster.fit_transform(opt_tica_trajs)
cluster_plotter(opt_tica_trajs, opt_cluster)
opt_msm.fit(clustered_opt_tica_trajs)
lag_times = [x*2 for x in range(1,55,2)]
print(lag_times)
opt_msm.timescales_[0:3]
first_three_timescales = []
for lag_time in lag_times:
msm = MarkovStateModel(lag_time=lag_time, n_timescales=14)
msm.fit(clustered_opt_tica_trajs)
first_three_timescales.append(msm.timescales_[0:3])
first_three_timescales
first_timescale = []
for comb in first_three_timescales:
first_timescale.append(comb[0])
plt.scatter(x=lag_times, y=first_timescale)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We featurize the trajectories into the pairwise distance between all the atoms in the EF hand and the catalytic Ca<sup>2+</sup>. Since we only loaded the relevant atoms, msmbuilder renumbers the indices from 0.
Step2: Cross validation
Step3: feat_scores.grid_scores_
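Since the last step points at feat_scores.grid_scores_, here is a sketch (assuming the fitted RandomizedSearchCV object from above) that tabulates the per-candidate scores and hyperparameters for quick inspection:
import numpy as np
import pandas as pd

rows = [dict(mean_score=np.mean(e.cv_validation_scores),
             std_score=np.std(e.cv_validation_scores),
             **e.parameters)
        for e in feat_scores.grid_scores_]
print(pd.DataFrame(rows).sort_values('mean_score', ascending=False).head())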
|
13,370
|
<ASSISTANT_TASK:>
Python Code:
data_in = '272091-815432'
def criteria(word):
meets = True
if '11' in word or \
'22' in word or \
'33' in word or \
'44' in word or \
'55' in word or \
'66' in word or \
'77' in word or \
'88' in word or \
'99' in word:
last_num = None
for x in word:
if last_num is not None:
if int(x) < last_num:
meets = False
last_num = int(x)
else:
meets = False
return meets
criteria('11111'), criteria('223450'), criteria('123789'),
count = 0
for i in range(*map(int, data_in.split('-'))):
if criteria(str(i)):
count += 1
print(count)
def criteria_2(word):
meets = True
this_digit_ok = [False]*10
for dig in '1234567890':
if dig*2 in word:
this_digit_ok[int(dig)] = True
for length in range(3,7):
if dig*length in word:
this_digit_ok[int(dig)] = False
if any(this_digit_ok):
last_num = None
for x in word:
if last_num is not None:
if int(x) < last_num:
meets = False
last_num = int(x)
else:
return False
return meets
criteria_2('112233'), criteria_2('123444'), criteria_2('111122'),
count = 0
for i in range(*map(int, data_in.split('-'))):
if criteria_2(str(i)):
count += 1
print(count)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: --- Part Two ---
|
13,371
|
<ASSISTANT_TASK:>
Python Code:
import urllib2
import csv
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
url_X_train = 'http://statweb.stanford.edu/~tibs/ElemStatLearn/datasets/14cancer.xtrain'
url_y_train = 'http://statweb.stanford.edu/~tibs/ElemStatLearn/datasets/14cancer.ytrain'
url_X_test = 'http://statweb.stanford.edu/~tibs/ElemStatLearn/datasets/14cancer.xtest'
url_y_test = 'http://statweb.stanford.edu/~tibs/ElemStatLearn/datasets/14cancer.ytest'
# We know there are 144 tumours in the training set and 54 in the test set, so let's make some column names:
column_names_train = ["Tumour_Sample_" + str(_) for _ in np.arange(144)+1]
column_names_test = ["Tumour_Sample_" + str(_) for _ in np.arange(54)+1]
# We will use Pandas to read and properly format the text-based data.
# The delimiter is a regular expression to look for zero or more repetitions of whitespace (\s).
X_train = pd.read_csv(url_X_train, delimiter='\s*', engine='python', names=column_names_train)
X_test = pd.read_csv(url_X_test, delimiter='\s*', engine='python', names=column_names_test)
# Get the labels and store as a list. There are 14 different cancers in the dataset.
y_train = urllib2.urlopen(url_y_train).read().strip().split()
y_test = urllib2.urlopen(url_y_test).read().strip().split()
# There are 14 different types of cancer, numbered 1 to 14, in the vectors y_test and y_train above.
# For visualising, you may find the names of the cancer types useful:
cancer_names_longform = ["Breast adenocarcinoma", "Prostate adenocarcinoma",
"Lung adenocarcinoma", "Collerectal adenocarcinoma",
"Lymphoma", "Bladder transitional cell carcinoma",
"Melanoma", "Uterine adenocarcinoma", "Leukemia",
"Renal cell carcinoma", "Pancreatic adenocarcinoma",
"Ovarian adenocarcinoma", "Pleural mesothelioma",
"Central nervous system"]
cancer_names_shortform = ["breast", "prostate", "lung", "collerectal",
"lymphoma", "bladder", "melanoma",
"uterus", "leukemia", "renal", "pancreas",
"ovary", "meso", "cns"]
# For testing you may want a merged training and test set.
# To save memory, these are commented out for now.
# X = pd.concat([X_train, X_test])
# y = y_train + y_test
X_train.shape
X_train.head()
X_test.tail()
len(np.unique(y_train))
stats.itemfreq(y_train)
for freq in stats.itemfreq(y_train):
print "%s samples appear %s times (shortform: %s)." % (cancer_names_longform[int(freq[0])-1],
freq[1],
cancer_names_shortform[int(freq[0])-1])
# Note: The transpose() function here does not permanently transpose the data stored in X_train.
X_train.transpose().describe()
# Your clustering code. Use as many cells as required, use Markdown cells to document where necessary.
# Your classification code. Use as many cells as required, use Markdown cells to document where necessary.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Exploration
Step2: To see a preview of the data, we can use the head and tail functions
Step3: Let's see how the classes are distributed. First let's look at the number of unique values, which should equal 14, as we know we have 14 different cancer types
Step4: We can see how the cancer types are distributed using the itemfreq function of the SciPy stats package
Step5: Using the cancer_names_longform list we declared above, we can print tumour frequencies nicely
Step6: You can take a quick look at some statistics values for each gene using the useful describe function (we use transpose to perform the analysis on a gene-by-gene basis). For example you may want to look at mean expression levels for each gene to see if they are over-expressed or under-expressed
Step7: Summary
Step8: Classification
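As one possible starting point for the classification placeholder, a hedged baseline sketch; it assumes the X_train/X_test/y_train/y_test objects loaded earlier in this notebook, and transposes them because samples are stored as columns:
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train.transpose().values, y_train)
pred = clf.predict(X_test.transpose().values)
print("Baseline kNN accuracy: {:0.3f}".format(accuracy_score(y_test, pred)))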
|
13,372
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install dm-tree
!pip install dm-reverb[tensorflow]
import reverb
import tensorflow as tf
OBSERVATION_SPEC = tf.TensorSpec([10, 10], tf.uint8)
ACTION_SPEC = tf.TensorSpec([2], tf.float32)
def agent_step(unused_timestep) -> tf.Tensor:
return tf.cast(tf.random.uniform(ACTION_SPEC.shape) > .5,
ACTION_SPEC.dtype)
def environment_step(unused_action) -> tf.Tensor:
return tf.cast(tf.random.uniform(OBSERVATION_SPEC.shape, maxval=256),
OBSERVATION_SPEC.dtype)
# Initialize the reverb server.
simple_server = reverb.Server(
tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(2),
# The signature is optional but it is good practice to set it as it
# enables data validation and easier dataset construction. Note that
# we prefix all shapes with a 3 as the trajectories we'll be writing
# consist of 3 timesteps.
signature={
'actions':
tf.TensorSpec([3, *ACTION_SPEC.shape], ACTION_SPEC.dtype),
'observations':
tf.TensorSpec([3, *OBSERVATION_SPEC.shape],
OBSERVATION_SPEC.dtype),
},
)
],
# Sets the port to None to make the server pick one automatically.
# This can be omitted as it's the default.
port=None)
# Initializes the reverb client on the same port as the server.
client = reverb.Client(f'localhost:{simple_server.port}')
# Dynamically adds trajectories of length 3 to 'my_table' using a client writer.
with client.trajectory_writer(num_keep_alive_refs=3) as writer:
timestep = environment_step(None)
for step in range(4):
action = agent_step(timestep)
writer.append({'action': action, 'observation': timestep})
timestep = environment_step(action)
if step >= 2:
# In this example, the item consists of the 3 most recent timesteps that
# were added to the writer and has a priority of 1.5.
writer.create_item(
table='my_table',
priority=1.5,
trajectory={
'actions': writer.history['action'][-3:],
'observations': writer.history['observation'][-3:],
}
)
# Dataset samples sequences of length 3 and streams the timesteps one by one.
# This allows streaming large sequences that do not necessarily fit in memory.
dataset = reverb.TrajectoryDataset.from_table_signature(
server_address=f'localhost:{simple_server.port}',
table='my_table',
max_in_flight_samples_per_worker=10)
# Batches 2 sequences together.
# Shapes of items is now [2, 3, 10, 10].
batched_dataset = dataset.batch(2)
for sample in batched_dataset.take(1):
# Results in the following format.
print(sample.info.key) # ([2], uint64)
print(sample.info.probability) # ([2], float64)
print(sample.data['observations']) # ([2, 3, 10, 10], uint8)
print(sample.data['actions']) # ([2, 3, 2], float32)
EPISODE_LENGTH = 150
complete_episode_server = reverb.Server(tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(2),
# The signature is optional but it is good practice to set it as it
# enables data validation and easier dataset construction. Note that
# the number of observations is larger than the number of actions.
# The extra observation is the terminal state where no action is
# taken.
signature={
'actions':
tf.TensorSpec([EPISODE_LENGTH, *ACTION_SPEC.shape],
ACTION_SPEC.dtype),
'observations':
tf.TensorSpec([EPISODE_LENGTH + 1, *OBSERVATION_SPEC.shape],
OBSERVATION_SPEC.dtype),
},
),
])
# Initializes the reverb client on the same port.
client = reverb.Client(f'localhost:{complete_episode_server.port}')
# Writes whole episodes of varying length to a Reverb server.
NUM_EPISODES = 10
# We know that episodes are at most 150 steps so we set the writer buffer size
# to 151 (to capture the final observation).
with client.trajectory_writer(num_keep_alive_refs=151) as writer:
for _ in range(NUM_EPISODES):
timestep = environment_step(None)
for _ in range(EPISODE_LENGTH):
action = agent_step(timestep)
writer.append({'action': action, 'observation': timestep})
timestep = environment_step(action)
# The astute reader will recognize that the final timestep has not been
# appended to the writer. We'll go ahead and add it WITHOUT an action. The
# writer will automatically fill in the gap with `None` for the action
# column.
writer.append({'observation': timestep})
# Now that the entire episode has been added to the writer buffer we can an
# item with a trajectory that spans the entire episode. Note that the final
# action must not be included as it is None and the trajectory would be
# rejected if we tried to include it.
writer.create_item(
table='my_table',
priority=1.5,
trajectory={
'actions': writer.history['action'][:-1],
'observations': writer.history['observation'][:],
})
# This call blocks until all the items (in this case only one) have been
# sent to the server, inserted into respective tables and confirmations
# received by the writer.
writer.end_episode(timeout_ms=1000)
# Ending the episode also clears the history property which is why we are
# able to use `[:]` in when defining the trajectory above.
assert len(writer.history['action']) == 0
assert len(writer.history['observation']) == 0
# Each sample is an entire episode.
# Adjusts the expected shapes to account for the whole episode length.
dataset = reverb.TrajectoryDataset.from_table_signature(
server_address=f'localhost:{complete_episode_server.port}',
table='my_table',
max_in_flight_samples_per_worker=10,
rate_limiter_timeout_ms=10)
# Batches 128 episodes together.
# Each item is an episode of the format (observations, actions) as above.
# Shape of items are now ([128, 151, 10, 10], [128, 150, 2]).
dataset = dataset.batch(128)
# Sample has type reverb.ReplaySample.
for sample in dataset.take(1):
# Results in the following format.
print(sample.info.key) # ([128], uint64)
print(sample.info.probability) # ([128], float64)
print(sample.data['observations']) # ([128, 151, 10, 10], uint8)
print(sample.data['actions']) # ([128, 150, 2], float32)
multitable_server = reverb.Server(
tables=[
reverb.Table(
name='my_table_a',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(1)),
reverb.Table(
name='my_table_b',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(1)),
])
client = reverb.Client('localhost:{}'.format(multitable_server.port))
with client.trajectory_writer(num_keep_alive_refs=3) as writer:
timestep = environment_step(None)
for step in range(4):
writer.append({'timestep': timestep})
action = agent_step(timestep)
timestep = environment_step(action)
if step >= 1:
writer.create_item(
table='my_table_b',
priority=4-step,
trajectory=writer.history['timestep'][-2:])
if step >= 2:
writer.create_item(
table='my_table_a',
priority=4-step,
trajectory=writer.history['timestep'][-3:])
reverb.Server(tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
rate_limiter=reverb.rate_limiters.MinSize(100)),
])
max_size = 1000
reverb.Server(tables=[
reverb.Table(
name='my_priority_queue',
sampler=reverb.selectors.MaxHeap(),
remover=reverb.selectors.MinHeap(),
max_size=max_size,
rate_limiter=reverb.rate_limiters.MinSize(int(0.95 * max_size)),
max_times_sampled=1,
)
])
reverb.Server(
tables=[
reverb.Table.queue(name='my_queue', max_size=10000),
reverb.Table(
name='my_circular_buffer',
sampler=reverb.selectors.Fifo(),
remover=reverb.selectors.Fifo(),
max_size=10000,
max_times_sampled=1,
rate_limiter=reverb.rate_limiters.MinSize(1)),
])
reverb.Server(
tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
rate_limiter=reverb.rate_limiters.SampleToInsertRatio(
samples_per_insert=3.0, min_size_to_sample=3,
error_buffer=3.0)),
])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Environments
Step2: The code below defines a dummy RL environment for use in the examples below.
Step3: Creating a Server and Client
Step4: For details on customizing the sampler, remover, and rate limiter, see below.
Step5: The animation illustrates the state of the server at each step in the
Step6: Example 2
Step7: Inserting Complete Episodes
Step8: Sampling Complete Episodes in TensorFlow
Step9: Example 3
Step10: Inserting Sequences of Varying Length into Multiple Priority Tables
Step11: This diagram shows the state of the server after executing the above cell.
Step12: Creating a Server with a MaxHeap Sampler and a MinHeap Remover
Step13: Creating a Server with One Queue and One Circular Buffer
Step14: Example 5
|
13,373
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
cd hyperspectral
import neon_aop_hyperspectral as neon_hs
refl, metadata = neon_hs.aop_h5refl2array('../../../data/NEON_D16_MCRA_DP3_566300_4901000_reflectance.h5')
b56 = refl[:,:,55]
neon_hs.plot_aop_refl(b56,
metadata['spatial extent'],
colorlimit=(0,0.3),
title='Band 56',
cmap_title='Reflectance',
colormap='gist_earth')
pixel_df = pd.DataFrame()
pixel_df['reflectance'] = refl[50,50,:]
pixel_df['wavelengths'] = metadata['wavelength']
print(pixel_df.head(5))
print(pixel_df.tail(5))
pixel_df.plot(x='wavelengths',y='reflectance',kind='scatter',edgecolor='none')
plt.title('Spectral Signature for Pixel (50,50)')
ax = plt.gca()
ax.set_xlim([np.min(pixel_df['wavelengths']),np.max(pixel_df['wavelengths'])])
ax.set_ylim([np.min(pixel_df['reflectance']),np.max(pixel_df['reflectance'])])
ax.set_xlabel("Wavelength, nm")
ax.set_ylabel("Reflectance")
ax.grid('on')
bbw1 = metadata['bad band window1'];
bbw2 = metadata['bad band window2'];
print('Bad Band Window 1:',bbw1)
print('Bad Band Window 2:',bbw2)
pixel_df.plot(x='wavelengths',y='reflectance',kind='scatter',edgecolor='none');
plt.title('Spectral Signature for Pixel (50,50)')
ax1 = plt.gca(); ax1.grid('on')
ax1.set_xlim([np.min(pixel_df['wavelengths']),np.max(pixel_df['wavelengths'])]);
ax1.set_ylim(0,0.5)
ax1.set_xlabel("Wavelength, nm"); ax1.set_ylabel("Reflectance")
#Add in red dotted lines to show boundaries of bad band windows:
ax1.plot((1340,1340),(0,1.5), 'r--')
ax1.plot((1445,1445),(0,1.5), 'r--')
ax1.plot((1790,1790),(0,1.5), 'r--')
ax1.plot((1955,1955),(0,1.5), 'r--')
import copy
w = metadata['wavelength'].copy() #make a copy to deal with the mutable data type
w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan #can also use bbw1[0] or bbw1[1] to avoid hard-coding in
w[-10:]=np.nan; # the last 10 bands sometimes have noise - best to eliminate
#print(w) #optionally print wavelength values to show that -9999 values are replaced with nan
#define index corresponding to nan values:
nan_ind = np.argwhere(np.isnan(w))
#define refl_band, refl, and metadata
refl_band = b56
from ipywidgets import *
def spectraPlot(pixel_x,pixel_y):
reflectance = refl[pixel_y,pixel_x,:]
reflectance[nan_ind]=np.nan
pixel_df = pd.DataFrame()
pixel_df['reflectance'] = reflectance
pixel_df['wavelengths'] = w
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(1,2,1)
# fig, axes = plt.subplots(nrows=1, ncols=2)
pixel_df.plot(ax=ax1,x='wavelengths',y='reflectance',kind='scatter',edgecolor='none');
ax1.set_title('Spectra of Pixel (' + str(pixel_x) + ',' + str(pixel_y) + ')')
ax1.set_xlim([np.min(metadata['wavelength']),np.max(metadata['wavelength'])]);
ax1.set_ylim([np.min(pixel_df['reflectance']),np.max(pixel_df['reflectance']*1.1)])
ax1.set_xlabel("Wavelength, nm"); ax1.set_ylabel("Reflectance")
ax1.grid('on')
ax2 = fig.add_subplot(1,2,2)
plot = plt.imshow(refl_band,extent=metadata['spatial extent'],clim=(0,0.1));
plt.title('Pixel Location');
cbar = plt.colorbar(plot,aspect=20); plt.set_cmap('gist_earth');
cbar.set_label('Reflectance',rotation=90,labelpad=20);
ax2.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax2.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
ax2.plot(metadata['spatial extent'][0]+pixel_x,metadata['spatial extent'][3]-pixel_y,'s',markersize=5,color='red')
ax2.set_xlim(metadata['spatial extent'][0],metadata['spatial extent'][1])
ax2.set_ylim(metadata['spatial extent'][2],metadata['spatial extent'][3])
interact(spectraPlot, pixel_x = (0,refl.shape[1]-1,1),pixel_y=(0,refl.shape[0]-1,1));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the hyperspectral functions into the variable neon_hs (for neon hyperspectral)
Step2: Optionally, you can view the data stored in the metadata dictionary, and print the minimum, maximum, and mean reflectance values in the tile. In order to handle any nan values, use the Numpy package (np) functions nanmin nanmax and nanmean.
Step3: We can use pandas to create a dataframe containing the wavelength and reflectance values for a single pixel - in this example, we'll look at pixel (50,50) of the tile.
Step4: We can preview the first and last five values of the dataframe using head and tail
Step5: We can now plot the spectra, stored in this dataframe structure. pandas has a built in plotting routine, which can be called by typing .plot at the end of the dataframe.
Step6: Water Vapor Band Windows
Step7: Below we repeat the plot we made above, but this time draw in the edges of the water vapor band windows that we need to remove.
Step8: We can now set these bad band windows to nan, along with the last 10 bands, which are also often noisy (as seen in the spectral profile plotted above). First make a copy of the wavelengths so that the original metadata doesn't change.
Step9: DEMO
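A small self-contained sketch of the bad-band masking step on a synthetic wavelength axis (the 1340-1445 nm and 1790-1955 nm windows are the ones reported in the metadata above; the 426-band axis is an illustrative assumption):
import numpy as np

w = np.linspace(380.0, 2510.0, 426)      # synthetic wavelength axis in nm
w_masked = w.copy()
w_masked[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))] = np.nan
w_masked[-10:] = np.nan                  # the last 10 bands are often noisy
print(np.count_nonzero(np.isnan(w_masked)), 'of', w_masked.size, 'bands masked')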
|
13,374
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import socnet as sn
sn.graph_width = 320
sn.graph_height = 180
g = sn.load_graph('2-largura.gml', has_pos=True)
sn.show_graph(g)
from math import inf, isinf
from queue import Queue
s = 1
q = Queue()
for n in g.nodes():
g.node[n]['d'] = inf
g.node[s]['d'] = 0
q.put(s)
while not q.empty():
n = q.get()
for m in g.neighbors(n):
if isinf(g.node[m]['d']):
g.node[m]['d'] = g.node[n]['d'] + 1
q.put(m)
for n in g.nodes():
print('distance to {}: {}'.format(n, g.node[n]['d']))
def snapshot(g, frames):
for n in g.nodes():
if isinf(g.node[n]['d']):
g.node[n]['label'] = '∞'
else:
g.node[n]['label'] = str(g.node[n]['d'])
frame = sn.generate_frame(g, nlab=True)
frames.append(frame)
red = (255, 0, 0) # added line
blue = (0, 0, 255) # added line
frames = [] # added line
s = 1
q = Queue()
for n in g.nodes():
g.node[n]['d'] = inf
g.node[s]['d'] = 0
q.put(s)
sn.reset_node_colors(g) # added line
sn.reset_edge_colors(g) # added line
snapshot(g, frames) # added line
while not q.empty():
n = q.get()
g.node[n]['color'] = red # added line
snapshot(g, frames) # added line
for m in g.neighbors(n):
g.edge[n][m]['color'] = red # added line
snapshot(g, frames) # added line
if isinf(g.node[m]['d']):
g.node[m]['d'] = g.node[n]['d'] + 1
q.put(m)
g.edge[n][m]['color'] = sn.edge_color # added line
snapshot(g, frames) # added line
g.node[n]['color'] = blue # added line
snapshot(g, frames) # added line
sn.show_animation(frames)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's configure the visual properties
Step2: Finally, let's load and visualize a graph
Step3: Paths of minimum length (shortest paths)
Step4: Visualizing algorithms
Step5: Let's now write an alternative version of the breadth-first search (a plain-Python sketch of the same idea follows below).
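For reference, a self-contained sketch of the same breadth-first search idea on a plain adjacency-list dict, without the socnet dependency (the toy graph is illustrative):
from collections import deque
from math import inf

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
dist = {n: inf for n in adj}
source = 0
dist[source] = 0
queue = deque([source])
while queue:
    n = queue.popleft()
    for m in adj[n]:
        if dist[m] == inf:
            dist[m] = dist[n] + 1
            queue.append(m)
print(dist)  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}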
|
13,375
|
<ASSISTANT_TASK:>
Python Code:
# sphinx_gallery_thumbnail_number = 2
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Sheraz Khan <sheraz@khansheraz.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.connectivity import envelope_correlation
from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs
from mne.preprocessing import compute_proj_ecg, compute_proj_eog
data_path = mne.datasets.brainstorm.bst_resting.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'bst_resting'
trans = op.join(data_path, 'MEG', 'bst_resting', 'bst_resting-trans.fif')
src = op.join(subjects_dir, subject, 'bem', subject + '-oct-6-src.fif')
bem = op.join(subjects_dir, subject, 'bem', subject + '-5120-bem-sol.fif')
raw_fname = op.join(data_path, 'MEG', 'bst_resting',
'subj002_spontaneous_20111102_01_AUX.ds')
raw = mne.io.read_raw_ctf(raw_fname, verbose='error')
raw.crop(0, 60).load_data().pick_types(meg=True, eeg=False).resample(80)
raw.apply_gradient_compensation(3)
projs_ecg, _ = compute_proj_ecg(raw, n_grad=1, n_mag=2)
projs_eog, _ = compute_proj_eog(raw, n_grad=1, n_mag=2, ch_name='MLT31-4407')
raw.info['projs'] += projs_ecg
raw.info['projs'] += projs_eog
raw.apply_proj()
cov = mne.compute_raw_covariance(raw) # compute before band-pass of interest
raw.filter(14, 30)
events = mne.make_fixed_length_events(raw, duration=5.)
epochs = mne.Epochs(raw, events=events, tmin=0, tmax=5.,
baseline=None, reject=dict(mag=8e-13), preload=True)
del raw
src = mne.read_source_spaces(src)
fwd = mne.make_forward_solution(epochs.info, trans, src, bem)
inv = make_inverse_operator(epochs.info, fwd, cov)
del fwd, src
labels = mne.read_labels_from_annot(subject, 'aparc_sub',
subjects_dir=subjects_dir)
epochs.apply_hilbert() # faster to apply in sensor space
stcs = apply_inverse_epochs(epochs, inv, lambda2=1. / 9., pick_ori='normal',
return_generator=True)
label_ts = mne.extract_label_time_course(
stcs, labels, inv['src'], return_generator=True)
corr = envelope_correlation(label_ts, verbose=True)
# let's plot this matrix
fig, ax = plt.subplots(figsize=(4, 4))
ax.imshow(corr, cmap='viridis', clim=np.percentile(corr, [5, 95]))
fig.tight_layout()
threshold_prop = 0.15 # percentage of strongest edges to keep in the graph
degree = mne.connectivity.degree(corr, threshold_prop=threshold_prop)
stc = mne.labels_to_stc(labels, degree)
stc = stc.in_label(mne.Label(inv['src'][0]['vertno'], hemi='lh') +
mne.Label(inv['src'][1]['vertno'], hemi='rh'))
brain = stc.plot(
clim=dict(kind='percent', lims=[75, 85, 95]), colormap='gnuplot',
subjects_dir=subjects_dir, views='dorsal', hemi='both',
smoothing_steps=25, time_label='Beta band')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we do some things in the name of speed, such as crop (which shortens the recording to 60 s) and downsample to 80 Hz.
Step2: Now we band-pass filter our data and create epochs.
Step3: Compute the forward and inverse
Step4: Compute label time series and do envelope correlation
Step5: Compute the degree and plot it
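For intuition, a rough self-contained sketch of what the proportional threshold plus degree computation amounts to, on a random symmetric matrix (this only approximates mne.connectivity.degree and is illustrative, not the function's actual implementation):
import numpy as np

rng = np.random.RandomState(0)
corr = np.abs(rng.rand(68, 68))
corr = (corr + corr.T) / 2.0           # symmetric, like an envelope-correlation matrix
np.fill_diagonal(corr, 0.0)

threshold_prop = 0.15                  # keep roughly the strongest 15% of edges
n_keep = int(round(threshold_prop * corr.size))
thresh = np.sort(corr.ravel())[-n_keep]
degree = (corr >= thresh).sum(axis=0)  # number of strong edges touching each node
print(degree[:10])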
|
13,376
|
<ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
"""Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
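# Side note (not part of the assignment's implementation): the fully vectorized
# version usually relies on the identity ||x - y||^2 = ||x||^2 + ||y||^2 - 2*x.y.
# A tiny illustrative sketch of that trick on small random matrices:
X_tr_demo = np.random.rand(6, 4)
X_te_demo = np.random.rand(3, 4)
sq_tr = np.sum(X_tr_demo ** 2, axis=1)            # shape (6,)
sq_te = np.sum(X_te_demo ** 2, axis=1)[:, None]   # shape (3, 1)
dists_demo = np.sqrt(np.maximum(sq_te + sq_tr - 2 * X_te_demo.dot(X_tr_demo.T), 0))
print(dists_demo.shape)                           # (3, 6)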
import collections
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = collections.defaultdict(list)
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
print('Evaluating k={0}'.format(k))
for fold in range(num_folds):
X_validation = X_train_folds[fold]
y_validation = y_train_folds[fold]
temp_X_train = np.vstack(X_train_folds[:fold] + X_train_folds[fold+1:])
temp_y_train = np.vstack(y_train_folds[:fold] + y_train_folds[fold+1:]).flatten()
classifier.train(temp_X_train, temp_y_train)
y_test_pred = classifier.predict(X_validation, k=k)
        k_to_accuracies[k].append(float(np.sum(y_test_pred == y_validation)) / y_validation.shape[0])
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
|
13,377
|
<ASSISTANT_TASK:>
Python Code:
[x*x for x in range(3)]
(x*x for x in range(3))
g = (x*x for x in range(3))
next(g)
next(g)
next(g)
next(g)
for i in g:
print(i, end=", ")
g = (x*x for x in range(3))
for i in g:
print(i, end=", ")
list(x*x for x in range(3))
def eager_updown(n):
xs = []
for i in range(n):
xs.append(i)
for i in range(n, -1, -1):
xs.append(i)
return xs
eager_updown(3)
def lazy_updown(n):
for i in range(n):
yield i
for i in range(n, -1, -1):
yield i
lazy_updown(3)
list(lazy_updown(3))
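# Added illustration (a small sketch, not in the original cells): because lazy_updown
# returns a generator, values are produced one at a time, only when requested.
g2 = lazy_updown(3)            # nothing is computed yet
next(g2), next(g2), next(g2)   # values appear on demand: (0, 1, 2)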
def pure(alist):
return [x*x for x in alist]
def impure(alist):
for i in range(len(alist)):
alist[i] = alist[i]*alist[i]
return alist
xs = [1,2,3]
ys = pure(xs)
print(xs, ys)
ys = impure(xs)
print(xs, ys)
import numpy as np  # np is used by f2 and the DNA examples below, but was not imported in this excerpt

def f1(n):
return n//2 if n % 2==0 else n*3+1
def f2(n):
return np.random.random(n)
def f3(n):
n = 23
return n
def f4(a, n=[]):
n.append(a)
return n
list(map(f1, range(10)))
list(filter(lambda x: x % 2 == 0, range(10)))
from functools import reduce
reduce(lambda x, y: x + y, range(10), 0)
reduce(lambda x, y: x + y, [[1,2], [3,4], [5,6]], [])
import operator as op
reduce(op.mul, range(1, 6), 1)
list(map(op.itemgetter(1), [[1,2,3],[4,5,6],[7,8,9]]))
import itertools as it
list(it.combinations(range(1,6), 3))
list(it.product([0,1], repeat=3))
list(it.starmap(op.add, zip(range(5), range(5))))
list(it.takewhile(lambda x: x < 3, range(10)))
data = sorted('the quick brown fox jumps over the lazy dog'.split(), key=len)
for k, g in it.groupby(data, key=len):
print(k, list(g))
import toolz as tz
list(tz.partition(3, range(10)))
list(tz.partition(3, range(10), pad=None))
n = 30
dna = ''.join(np.random.choice(list('ACTG'), n))
dna
tz.frequencies(tz.sliding_window(2, dna))
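# Added sanity check (uses only names defined above): a string of length n has exactly
# n - 1 overlapping 2-mers, so the dimer counts must sum to n - 1.
sum(tz.frequencies(tz.sliding_window(2, dna)).values()) == n - 1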
from toolz import curried as c
tz.pipe(
dna,
c.sliding_window(2), # using curry
c.frequencies,
)
composed = tz.compose(
c.frequencies,
c.sliding_window(2),
)
composed(dna)
m = 10000
n = 300
dnas = (''.join(np.random.choice(list('ACTG'), n, p=[.1, .2, .3, .4]))
for i in range(m))
dnas
tz.merge_with(sum,
tz.map(
composed,
dnas
)
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A generator expression is lazy.
Step2: You can use generators as iterators.
Step3: A generator is single use.
Step4: The list constructor forces evaluation of the generator.
Step5: An eager function.
Step6: A lazy generator.
Step7: Pure and impure functions
Step8: An impure function has side effects.
Step9: Quiz
Step10: Higher order functions
Step11: Using the operator module
Step12: Using itertools
Step13: Generate all Boolean combinations
Step14: Using toolz
Step15: Using pipes and the curried namespace
Step16: Processing many sets of DNA strings without reading into memory
|
13,378
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt

from lsst.cwfs.instrument import Instrument
from lsst.cwfs.algorithm import Algorithm
from lsst.cwfs.image import Image, readFile
import lsst.cwfs.plots as plots
ff = np.loadtxt('../tests/testImages/FAM/ccdCenter189.txt')
f25 = np.zeros((25,4))
ii = 0
for i in range(ff.shape[0]):
if (ff[i,3] == 11
or (ff[i,2] == 14 and ff[i,3] == 12)
or (ff[i,2] == 10 and ff[i,3] == 10)
or (ff[i,2] == 34 and ff[i,3] == 12)
or (ff[i,2] == 30 and ff[i,3] == 10)):
f25[ii,:] = ff[i,:]
ii+=1
print(ii)
nField = ii
rerun = 0
x0 = np.arange(4,23)
if rerun:
testResults = np.zeros((nField, 19))
else:
testResults = np.loadtxt('famTestResults.txt')
fig, ax = plt.subplots(5,5, figsize=(20,20))
for iField in range(nField):
fieldXY = [f25[iField, 0], f25[iField,1]]
fieldR = np.sqrt(sum(np.array(fieldXY)**2))
det = 'R%02d_S%02d'%(f25[iField, 2], f25[iField,3])
print('%2d, %s, fieldX = %+.2f, fieldY = %+.2f, fieldR = %.2f, WFS fieldR = %.2f'%(
iField, det, fieldXY[0], fieldXY[1], fieldR, 1.176*1.414))
I1 = Image(readFile('../tests/testImages/FAM/Image_intra_%s.txt'%det),
fieldXY, Image.INTRA)
I2 = Image(readFile('../tests/testImages/FAM/Image_extra_%s.txt'%det),
fieldXY, Image.EXTRA)
inst=Instrument('lsstfam',I1.sizeinPix)
algo=Algorithm('exp',inst,0)
if rerun:
algo.runIt(inst,I1,I2,'offAxis')
else:
algo.zer4UpNm = testResults[iField, :]
#truth from Zemax, in wave, wavelength = 500nm
zTrue = np.loadtxt('../tests/testImages/FAM/zn_%s.txt'%det )
irow = np.int8(iField/5)
icol = np.mod(iField,5)
ax[irow][icol].plot(x0, algo.zer4UpNm,'-ro', label='measured')
ax[irow][icol].plot(x0, zTrue[3:22]*500,'-bo', label = 'truth') #wavelength = 500nm
ax[irow][icol].grid()
ax[irow][icol].legend()
ax[irow][icol].set_title(det)
if rerun:
testResults[iField,:] = algo.zer4UpNm
if rerun:
np.savetxt('famTestResults.txt', testResults)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We use 25 fields for this test
Step2: distribution of the test fields
|
13,379
|
<ASSISTANT_TASK:>
Python Code:
import slater
print(slater.__doc__)
print(slater.AOType.all_shells)
print(slater.AOType.s)
p_type = slater.AOType.from_string("p")
print(p_type)
f_type = slater.AOType.from_int(3)
print(f_type, " l = ", f_type.l)
AO_2s = slater.AO(n=2, aoType=slater.AOType.s, occ=1)
print(AO_2s)
print("n = ", AO_2s.n, "\nl = ", AO_2s.l)
print(AO_2s.name)
print("degeneracy (2l + 1) = ", AO_2s.degeneracy)
print(AO_2s.occ)
OA_3d = slater.AO.from_string("3d")
print("OA : ", OA_3d.name, "\nn = ", OA_3d.n, "\nl = ", OA_3d.l, "\ndeg = ", OA_3d.degeneracy)
OA_4p = slater.AO(4, "p")
print(OA_4p)
OA_3s = slater.AO(3, 0)
print(OA_3s)
k = slater.Klechkowski()
print(k)
Na = slater.ElectronicConf(nelec=11)
print(Na)
Ca = slater.ElectronicConf.from_string("1s^2 2s^2 2p^6 3s^2 3p^6 4s^2")
print(Ca)
print(Na.toTex())
print(Na.valence)
data = Na.computeEnergy()
print(data[slater.AO.from_string("2p")])
sigma, e = data[slater.AO.from_string("3s")]
print("sigma = ", sigma, "\ne = ", e, "eV")
print("E = ", Na.energy)
print("Na :", Na)
print("q :", Na.q, " Z = ", Na.Z)
Na_p = Na.ionize(1)
print("Na + :", Na_p)
print("q :", Na_p.q, " Z = ", Na_p.Z)
Cl = slater.ElectronicConf(nelec=17)
print("Cl :", Cl)
Cl_m = Cl.ionize(-1)
print("Cl- :", Cl_m)
V = slater.ElectronicConf(nelec=23)
print("V :", V)
for i in [1, 2, 3]:
ion = V.ionize(i)
print("V{}+ :".format(ion.q), ion)
Cr = slater.ElectronicConf(nelec=24)
print(Cr)
Cr_exc = slater.ElectronicConf.from_string("1s^2 2s^2 2p^6 3s^2 3p^6 4s^1 3d^5")
print(Cr_exc)
d = Cr.computeEnergy()
d_exc = Cr_exc.computeEnergy()
Cr.energy < Cr_exc.energy
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Atomic orbitals
Step2: The atomic orbital class
Step3: An occupancy can be set for the shell.
Step4: You can define the AO from a usual string.
Step5: Other ways to define the AO
Step6: The Klechkowski class
Step7: The ElectronicConf class
Step8: Or you can give your own electronic configuration.
Step9: You can export the configuration in as a latex formula.
Step10: You can print valence electrons
Step11: Compute energies using slater's rule
Step12: Energy of the configuration
Step13: Play with ions
Step14: More complicated (4s / 3d inversion)
Step15: Example of Chromium exception
|
13,380
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# clear old variables
tf.reset_default_graph()
# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
def simple_model(X,y):
# define our weights (e.g. init_two_layer_convnet)
# setup variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
W1 = tf.get_variable("W1", shape=[5408, 10])
b1 = tf.get_variable("b1", shape=[10])
# define our graph (e.g. two_layer_convnet)
a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
h1 = tf.nn.relu(a1)
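    # With a 7x7 kernel, stride 2 and VALID padding on a 32x32x3 input, the conv output
    # is 13x13x32, which is where the flattened size 13*13*32 = 5408 below comes from.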
h1_flat = tf.reshape(h1,[-1,5408])
y_out = tf.matmul(h1_flat,W1) + b1
return y_out
y_out = simple_model(X,y)
# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)
def run_model(session, predict, loss_val, Xd, yd,
epochs=1, batch_size=64, print_every=100,
training=None, plot_losses=False):
# have tensorflow compute accuracy
correct_prediction = tf.equal(tf.argmax(predict,1), y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# shuffle indicies
train_indicies = np.arange(Xd.shape[0])
np.random.shuffle(train_indicies)
training_now = training is not None
# setting up variables we want to compute (and optimizing)
# if we have a training function, add that to things we compute
variables = [mean_loss,correct_prediction,accuracy]
if training_now:
variables[-1] = training
# counter
iter_cnt = 0
for e in range(epochs):
# keep track of losses and accuracy
correct = 0
losses = []
# make sure we iterate over the dataset once
for i in range(int(math.ceil(Xd.shape[0]/batch_size))):
# generate indicies for the batch
start_idx = (i*batch_size)%X_train.shape[0]
idx = train_indicies[start_idx:start_idx+batch_size]
# create a feed dictionary for this batch
feed_dict = {X: Xd[idx,:],
y: yd[idx],
is_training: training_now }
# get batch size
            actual_batch_size = yd[idx].shape[0]
# have tensorflow compute loss and correct predictions
# and (if given) perform a training step
loss, corr, _ = session.run(variables,feed_dict=feed_dict)
# aggregate performance stats
losses.append(loss*actual_batch_size)
correct += np.sum(corr)
# print every now and then
if training_now and (iter_cnt % print_every) == 0:
print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"\
.format(iter_cnt,loss,np.sum(corr)/actual_batch_size))
iter_cnt += 1
total_correct = correct/Xd.shape[0]
total_loss = np.sum(losses)/Xd.shape[0]
print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"\
.format(total_loss,total_correct,e+1))
if plot_losses:
plt.plot(losses)
plt.grid(True)
plt.title('Epoch {} Loss'.format(e+1))
plt.xlabel('minibatch number')
plt.ylabel('minibatch loss')
plt.show()
return total_loss,total_correct
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# clear old variables
tf.reset_default_graph()
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X,y,is_training):
Wconv = tf.get_variable('Wconv1', shape=[7, 7, 3, 32])
bconv = tf.get_variable('bconv1', shape=[32])
W1 = tf.get_variable('W1', shape=[5408, 1024])
b1 = tf.get_variable('b1', shape=[1024])
W2 = tf.get_variable('W2', shape=[1024, 10])
b2 = tf.get_variable('b2', shape=[10])
# 7x7 Convolutional Layer with 32 filters and stride of 1
conv = tf.nn.conv2d(X, Wconv, strides=[1, 1, 1, 1], padding='VALID') + bconv
# ReLU Activation Layer
conv_relu = tf.nn.relu(conv)
# Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
conv_batch = tf.contrib.layers.batch_norm(
conv_relu, center=True, trainable=True, scale=True, epsilon=1e-7,
is_training=is_training)
# 2x2 Max Pooling layer with a stride of 2
conv_pool = tf.nn.max_pool(
conv_batch, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
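    # After the 7x7 stride-1 VALID conv (32x32 -> 26x26) and the 2x2 stride-2 max pool
    # (26x26 -> 13x13), the activation is 13x13x32, i.e. 13*13*32 = 5408 values below.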
pool_flat = tf.reshape(conv_pool, [-1, 5408])
# Affine layer with 1024 output units
fc_hidden = tf.matmul(pool_flat, W1) + b1
# ReLU Activation Layer
fc_hidden_relu = tf.nn.relu(fc_hidden)
# Affine layer from 1024 input units to 10 outputs
y_out = tf.matmul(fc_hidden_relu, W2) + b2
return y_out
y_out = complex_model(X,y,is_training)
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, 32, 32,3)
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
print(ans.shape)
print(np.array_equal(ans.shape, np.array([64, 10])))
try:
with tf.Session() as sess:
with tf.device("/gpu:0") as dev: #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
except tf.errors.InvalidArgumentError:
print("no gpu found, please use Google Cloud if you want GPU acceleration")
# rebuild the graph
# trying to start a GPU throws an exception
# and also trashes the original graph
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = complex_model(X,y,is_training)
# Inputs
# y_out: is what your model computes
# y: is your TensorFlow variable with label information
# Outputs
# mean_loss: a TensorFlow variable (scalar) with numerical loss
# optimizer: a TensorFlow optimizer
# This should be ~3 lines of code!
mean_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.one_hot(y, 10), logits=y_out))
optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-3)
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_step = optimizer.minimize(mean_loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# Feel free to play with this cell
CONVOLUTION_LAYERS = [
{
'conv_kernel_size_1': [3, 3],
'conv_filters_1': 32,
'conv_kernel_size_2': [3, 3],
'conv_filters_2': 32,
'pool_size': [2, 2],
'pool_strides': [2, 2]
},
{
'conv_kernel_size_1': [3, 3],
'conv_filters_1': 64,
'conv_kernel_size_2': [3, 3],
'conv_filters_2': 64,
'pool_size': [2, 2],
'pool_strides': [2, 2]
}
]
def conv_layer(x, id,
conv1_kernel_size,
conv1_filters, conv2_kernel_size, conv2_filters, pool_size,
pool_strides, is_training=False):
with tf.variable_scope('conv_' + str(id)):
conv1 = tf.layers.conv2d(
inputs=x,
filters=conv1_filters,
kernel_size=conv1_kernel_size,
padding='same',
activation=tf.nn.relu)
conv_batch = tf.contrib.layers.batch_norm(
conv1, center=True, trainable=True, scale=True, epsilon=1e-7,
is_training=is_training)
conv2 = tf.layers.conv2d(
inputs=conv_batch,
filters=conv2_filters,
kernel_size=conv2_kernel_size,
padding='same',
activation=tf.nn.relu)
conv2_batch = tf.contrib.layers.batch_norm(
conv2, center=True, trainable=True, scale=True, epsilon=1e-7,
is_training=is_training)
pool = tf.layers.max_pooling2d(
inputs=conv2_batch, pool_size=pool_size, strides=pool_strides)
return pool
def my_model(X, y, is_training=False, fc_size=1024, convolution_layers=CONVOLUTION_LAYERS):
current_layer = X
for i, layer_desc in enumerate(CONVOLUTION_LAYERS):
current_layer = conv_layer(
current_layer, i, layer_desc['conv_kernel_size_1'], layer_desc['conv_filters_1'],
layer_desc['conv_kernel_size_2'], layer_desc['conv_filters_2'],
layer_desc['pool_size'], layer_desc['pool_strides'], is_training=is_training)
flatten_layer = tf.contrib.layers.flatten(inputs=current_layer)
with tf.variable_scope('fc_1'):
fc_layer = tf.layers.dense(inputs=flatten_layer, units=fc_size, activation=tf.nn.relu)
dropout_layer = tf.layers.dropout(
inputs=fc_layer, rate=0.5, training=is_training)
logits = tf.layers.dense(inputs=dropout_layer, units=10)
return logits
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = my_model(X,y,is_training=is_training)
mean_loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf.one_hot(y, 10), logits=y_out))
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_step = optimizer.minimize(mean_loss)
# Feel free to play with this cell
# This default code creates a session
# and trains your model for 10 epochs
# then prints the validation set accuracy
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,10,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# Test your model here, and make sure
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What's this TensorFlow business?
Step2: Example Model
Step3: TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).
Step4: Training a specific model
Step5: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes)
Step6: You should see the following from the run above
Step7: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
Step8: Train the model
Step9: Check the accuracy of the model.
Step10: Train a great model on CIFAR-10!
Step11: Describe what you did here
|
13,381
|
<ASSISTANT_TASK:>
Python Code:
import random
import numpy as np
import matplotlib.pyplot as plt
import quantities as pq
import neo
import elephant.unitary_event_analysis as ue
# Fix random seed to guarantee fixed output
random.seed(1224)
# Download data
!curl https://web.gin.g-node.org/INM-6/elephant-data/raw/master/dataset-1/dataset-1.h5 --output dataset-1.h5 --location
def plot_UE(data,Js_dict,Js_sig,bin_size,winsize,winstep, pat,N,t_winpos,**kwargs):
    """
    Examples:
    ---------
    dict_args = {'events':{'SO':[100*pq.ms]},
                 'save_fig': True,
                 'path_filename_format':'UE1.pdf',
                 'showfig':True,
                 'suptitle':True,
                 'figsize':(12,10),
                 'unit_ids':[10, 19, 20],
                 'ch_ids':[1,3,4],
                 'fontsize':15,
                 'linewidth':2,
                 'set_xticks':False,
                 'marker_size':8}
    """
import matplotlib.pylab as plt
t_start = data[0][0].t_start
t_stop = data[0][0].t_stop
arg_dict = {'events':{},'figsize':(12,10), 'top':0.9, 'bottom':0.05, 'right':0.95,'left':0.1,
'hspace':0.5,'wspace':0.5,'fontsize':15,'unit_ids':range(1,N+1,1),
'ch_real_ids':[],'showfig':False, 'lw':2,'S_ylim':[-3,3],
'marker_size':8, 'suptitle':False, 'set_xticks':False,
'save_fig':False,'path_filename_format':'UE.pdf'}
arg_dict.update(kwargs)
num_tr = len(data)
unit_real_ids = arg_dict['unit_ids']
num_row = 5
num_col = 1
ls = '-'
alpha = 0.5
plt.figure(1,figsize = arg_dict['figsize'])
if arg_dict['suptitle'] == True:
plt.suptitle("Spike Pattern:"+ str((pat.T)[0]),fontsize = 20)
print('plotting UEs ...')
plt.subplots_adjust(top=arg_dict['top'], right=arg_dict['right'], left=arg_dict['left']
, bottom=arg_dict['bottom'], hspace=arg_dict['hspace'] , wspace=arg_dict['wspace'])
ax = plt.subplot(num_row,1,1)
ax.set_title('Unitary Events',fontsize = arg_dict['fontsize'],color = 'r')
for n in range(N):
for tr,data_tr in enumerate(data):
plt.plot(data_tr[n].rescale('ms').magnitude, np.ones_like(data_tr[n].magnitude)*tr + n*(num_tr + 1) + 1, '.', markersize=0.5,color = 'k')
sig_idx_win = np.where(Js_dict['Js']>= Js_sig)[0]
if len(sig_idx_win)>0:
x = np.unique(Js_dict['indices']['trial'+str(tr)])
if len(x)>0:
xx = []
for j in sig_idx_win:
xx =np.append(xx,x[np.where((x*bin_size>=t_winpos[j]) &(x*bin_size<t_winpos[j] + winsize))])
plt.plot(
np.unique(xx)*bin_size, np.ones_like(np.unique(xx))*tr + n*(num_tr + 1) + 1,
ms=arg_dict['marker_size'], marker = 's', ls = '',mfc='none', mec='r')
plt.axhline((tr + 2)*(n+1) ,lw = 2, color = 'k')
y_ticks_pos = np.arange(num_tr/2 + 1,N*(num_tr+1), num_tr+1)
plt.yticks(y_ticks_pos)
plt.gca().set_yticklabels(unit_real_ids,fontsize = arg_dict['fontsize'])
for ch_cnt, ch_id in enumerate(arg_dict['ch_real_ids']):
print(ch_id)
plt.gca().text((max(t_winpos) + winsize).rescale('ms').magnitude,
y_ticks_pos[ch_cnt],'CH-'+str(ch_id),fontsize = arg_dict['fontsize'])
plt.ylim(0, (tr + 2)*(n+1) + 1)
plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
plt.xticks([])
plt.ylabel('Unit ID',fontsize = arg_dict['fontsize'])
for key in arg_dict['events'].keys():
for e_val in arg_dict['events'][key]:
plt.axvline(e_val,ls = ls,color = 'r',lw = 2,alpha = alpha)
if arg_dict['set_xticks'] == False:
plt.xticks([])
print('plotting Raw Coincidences ...')
ax1 = plt.subplot(num_row,1,2,sharex = ax)
ax1.set_title('Raw Coincidences',fontsize = 20,color = 'c')
for n in range(N):
for tr,data_tr in enumerate(data):
plt.plot(data_tr[n].rescale('ms').magnitude,
np.ones_like(data_tr[n].magnitude)*tr + n*(num_tr + 1) + 1,
'.', markersize=0.5, color = 'k')
plt.plot(
np.unique(Js_dict['indices']['trial'+str(tr)])*bin_size,
np.ones_like(np.unique(Js_dict['indices']['trial'+str(tr)]))*tr + n*(num_tr + 1) + 1,
ls = '',ms=arg_dict['marker_size'], marker = 's', markerfacecolor='none', markeredgecolor='c')
plt.axhline((tr + 2)*(n+1) ,lw = 2, color = 'k')
plt.ylim(0, (tr + 2)*(n+1) + 1)
plt.yticks(np.arange(num_tr/2 + 1,N*(num_tr+1), num_tr+1))
plt.gca().set_yticklabels(unit_real_ids,fontsize = arg_dict['fontsize'])
plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
plt.xticks([])
plt.ylabel('Unit ID',fontsize = arg_dict['fontsize'])
for key in arg_dict['events'].keys():
for e_val in arg_dict['events'][key]:
plt.axvline(e_val,ls = ls,color = 'r',lw = 2,alpha = alpha)
print('plotting PSTH ...')
plt.subplot(num_row,1,3,sharex=ax)
#max_val_psth = 0.*pq.Hz
for n in range(N):
plt.plot(t_winpos + winsize/2.,Js_dict['rate_avg'][:,n].rescale('Hz'),
label = 'unit '+str(arg_dict['unit_ids'][n]),lw = arg_dict['lw'])
plt.ylabel('Rate [Hz]',fontsize = arg_dict['fontsize'])
plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
max_val_psth = plt.gca().get_ylim()[1]
plt.ylim(0, max_val_psth)
plt.yticks([0, int(max_val_psth/2),int(max_val_psth)],fontsize = arg_dict['fontsize'])
plt.legend(bbox_to_anchor=(1.12, 1.05), fancybox=True, shadow=True)
for key in arg_dict['events'].keys():
for e_val in arg_dict['events'][key]:
plt.axvline(e_val,ls = ls,color = 'r',lw = arg_dict['lw'],alpha = alpha)
if arg_dict['set_xticks'] == False:
plt.xticks([])
print( 'plotting emp. and exp. coincidences rate ...')
plt.subplot(num_row,1,4,sharex=ax)
plt.plot(t_winpos + winsize/2.,Js_dict['n_emp'],label = 'empirical',lw = arg_dict['lw'],color = 'c')
plt.plot(t_winpos + winsize/2.,Js_dict['n_exp'],label = 'expected',lw = arg_dict['lw'],color = 'm')
plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
plt.ylabel('# Coinc.',fontsize = arg_dict['fontsize'])
plt.legend(bbox_to_anchor=(1.12, 1.05), fancybox=True, shadow=True)
YTicks = plt.ylim(0,int(max(max(Js_dict['n_emp']), max(Js_dict['n_exp']))))
plt.yticks([0,YTicks[1]],fontsize = arg_dict['fontsize'])
for key in arg_dict['events'].keys():
for e_val in arg_dict['events'][key]:
plt.axvline(e_val,ls = ls,color = 'r',lw = 2,alpha = alpha)
if arg_dict['set_xticks'] == False:
plt.xticks([])
print('plotting Surprise ...')
plt.subplot(num_row,1,5,sharex=ax)
plt.plot(t_winpos + winsize/2., Js_dict['Js'],lw = arg_dict['lw'],color = 'k')
plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
plt.axhline(Js_sig,ls = '-', color = 'gray')
plt.axhline(-Js_sig,ls = '-', color = 'gray')
plt.xticks(t_winpos.magnitude[::int(len(t_winpos)/10)])
plt.yticks([-2,0,2],fontsize = arg_dict['fontsize'])
plt.ylabel('S',fontsize = arg_dict['fontsize'])
plt.xlabel('Time [ms]', fontsize = arg_dict['fontsize'])
plt.ylim(arg_dict['S_ylim'])
for key in arg_dict['events'].keys():
for e_val in arg_dict['events'][key]:
plt.axvline(e_val,ls = ls,color = 'r',lw = arg_dict['lw'],alpha = alpha)
plt.gca().text(e_val - 10*pq.ms,2*arg_dict['S_ylim'][0],key,fontsize = arg_dict['fontsize'],color = 'r')
if arg_dict['set_xticks'] == False:
plt.xticks([])
if arg_dict['save_fig'] == True:
plt.savefig(arg_dict['path_filename_format'])
if arg_dict['showfig'] == False:
plt.cla()
plt.close()
if arg_dict['showfig'] == True:
plt.show()
block = neo.io.NeoHdf5IO("./dataset-1.h5")
sts1 = block.read_block().segments[0].spiketrains
sts2 = block.read_block().segments[1].spiketrains
spiketrains = np.vstack((sts1,sts2)).T
UE = ue.jointJ_window_analysis(
spiketrains, bin_size=5*pq.ms, winsize=100*pq.ms, winstep=10*pq.ms, pattern_hash=[3])
plot_UE(
spiketrains, UE, ue.jointJ(0.05),bin_size=5*pq.ms,winsize=100*pq.ms,winstep=10*pq.ms,
pat=ue.inverse_hash_from_pattern([3], N=2), N=2,
t_winpos=ue._winpos(0*pq.ms,spiketrains[0][0].t_stop,winsize=100*pq.ms,winstep=10*pq.ms))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we download a data file containing spike train data from multiple trials of two neurons.
Step3: Write a plotting function
Step4: Load data and extract spiketrains
Step5: Calculate Unitary Events
|
13,382
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Import Statements
%pylab inline
import time
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import tensorflow as tf
import seaborn as sns
import scipy
tf.enable_eager_execution(config=tf.ConfigProto(log_device_placement=True))
tf.logging.set_verbosity(tf.logging.DEBUG)
tfe = tf.contrib.eager
sns.set_context('notebook', font_scale=1.25)
sns.set_style('whitegrid')
def L_goodfellow_saddle(x):
  """Function given in Goodfellow et al. Accepts a np.array of dim 2."""
return (1-x[0]*x[1]) ** 2
#@title Run this cell to initialize the library of tools being used in this notebook.
def plot_contours(
L_fn,
ylims=(-2., 2.),
xlims=(-2., 2.),
nx=10,
ny=10,
ax=None,
show_contour_lines=False,
**plot_kwargs):
  """Plots the contours of the function in 2D space.

  Args:
    L_fn: The loss function that accepts a np.ndarray of dim 2.
    ylims: A tuple of floats containing the limits on the y-axis.
    xlims: A tuple of floats containing the limits on the x-axis.
    nx: The integer number of points from the x-domain.
    ny: The integer number of points from the y-domain.
    ax: A matplotlib.axes instance to do plots on.
    **plot_kwargs: Other arguments that will be passed onto the plotter.

  Returns:
    ax: A matplotlib.axes instance with the figure plotted.
    J: A np.ndarray of shape (nx*ny, ) with evaluations of the function.
    xy_vectors: A np.ndarray of shape (nx*ny, 2) of the evaluated points.
  """
if ax is None: ax = plt.figure().add_subplot(111)
# Get points to evaluat the function at.
x = np.linspace(*xlims, num=nx)
y = np.linspace(*ylims, num=ny)
X, Y = np.meshgrid(x, y) # Mesh grid for combinations.
xy_vectors = np.stack([X, Y], axis=2).reshape(-1, 2) # Reshape into a batch.
# Batch apply the function:
J = np.apply_along_axis(L_fn, 1, xy_vectors).reshape(nx, ny)
cs = ax.contourf(x, y, J, **plot_kwargs) # Plot!
if show_contour_lines: ax.contour(cs, colors='gray')
if show_contour_lines: ax.clabel(cs, inline=True, fontsize=10, colors='gray')
return ax, J, xy_vectors
# TODO(zaf): See if there are tools built into tensorflow that does this.
def get_flat_params(parameters):
  """Returns flattened model parameters.

  Given a list of tensorflow variables, this returns a numpy array
  containing a flat representation of all the parameters.
  Only works in eager mode.

  Args:
    parameters: The iterable containing the tf.Variable objects.

  Returns:
    A numpy array containing the parameters.
  """
params = []
for param in parameters:
params.append(param.numpy().reshape(-1))
return np.concatenate(params)
def set_flat_params(model, flat_params, trainable_only=True):
  """Set model parameters with a linear numpy array.

  Takes a flat tensor containing parameters and sets the model with
  those parameters.

  Args:
    model: The tf.keras.Model object to set the params of.
    flat_params: The flattened contiguous 1D numpy array containing
      the parameters to set.
    trainable_only: Set only the trainable parameters.

  Returns:
    The keras model from `model` but with the parameters set to `flat_params`.
  """
idx = 0
if trainable_only:
variables = model.trainable_variables
else:
variables = model.variables
for param in variables:
# This will be 1 if param.shape is empty, corresponding to a single value.
flat_size = int(np.prod(list(param.shape)))
flat_param_to_assign = flat_params[idx:idx + flat_size]
# Explicit check here because of: b/112443506
if len(param.shape): # pylint: disable=g-explicit-length-test
flat_param_to_assign = flat_param_to_assign.reshape(*param.shape)
else:
flat_param_to_assign = flat_param_to_assign[0]
param.assign(flat_param_to_assign)
idx += flat_size
return model
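# A small added self-check sketch (assumes eager execution, enabled above; the names
# below are new and only used here): round-trip a tiny model's parameters through
# get_flat_params / set_flat_params and confirm nothing changes.
_tiny_model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
_flat = get_flat_params(_tiny_model.trainable_variables)
_tiny_model = set_flat_params(_tiny_model, _flat)
print(np.allclose(_flat, get_flat_params(_tiny_model.trainable_variables)))  # expect True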
X_LABEL = r'$L(x_0+\alpha d)-L(x_0)$'
Y_LABEL = r'$L(x_0-\alpha d)-L(x_0)$'
# plt.figure(figsize=(5, 4))
# ax = plt.gca()
ax, _, _ = plot_contours(
L_goodfellow_saddle, nx=100, ny=100, cmap='viridis_r',
ylims=(-2, 2), xlims=(-2, 2),
levels=np.arange(-0.5, 5.0, 0.1).tolist(), ax=None)
ax.set_xlabel(r'$\theta[0]$')
ax.set_ylabel(r'$\theta[1]$')
ax.set_title(r'$L(\theta) = (1-\theta[0]\theta[1])^2$')
# plt.xlim(-2.5, 2.5)
# plt.ylim(-2.5, 2.5)
# plt.plot(x0, x1, linestyle='--', color='k')
plt.text(0, 0, 'Saddle', )
optima_manifold = np.linspace(-2, 2)
# ax.scatter(list(zip(optima_manifold, 1/optima_manifold)))
plt.text(-1.5, -1, 'Local Optimas')
plt.text(0.5, 1, 'Local Optimas')
x0 = np.array([-2, -0.5])
x1 = np.array([-0.5, -2.])
alphas = np.linspace(-0.5, 1.5, num=50)
L_vals = []
for alpha in alphas:
new_x = (1-alpha)*x0 + alpha*x1
L_vals.append(-L_goodfellow_saddle(new_x))
plt.figure(figsize=(5, 4))
plt.plot(alphas, L_vals)
plt.xlabel(r'Interpolation Coefficient, $\alpha$')
plt.ylabel(r'$\mathcal{L}((1-\alpha)\theta_0+\alpha\theta_1$)')
plt.xticks([0.0, 1.0])
plt.tight_layout()
# plt.savefig('demo_interpolation.pdf')
plt.figure(figsize=(5, 4))
ax = plt.gca()
ax, _, _ = plot_contours(
L_goodfellow_saddle, nx=100, ny=100, cmap='viridis_r',
ylims=(-2.25, 0.05), xlims=(-2.25, 0.05),
show_contour_lines=True,
levels=[0.0, 0.1, 1.0, 2.0, 5.0], ax=ax)
ax.set_xlabel(r'$\theta[0]$')
ax.set_ylabel(r'$\theta[1]$')
plt.xlim(-2.25, 0.05)
plt.ylim(-2.25, 0.05)
plt.plot(x0, x1, linestyle='--', color='k')
plt.text(x0[0], x0[1], r'$\theta_1$')
plt.text(x1[0], x1[1], r'$\theta_2$')
plt.tight_layout()
# plt.savefig('demo_curve.pdf')
def sample_directions(x_dim, num_samples=100):
  """Sample normalized random directions.

  Args:
    x_dim: The integer dimensionality of each direction vector.
    num_samples: The number of samples to obtain.

  Returns:
    A np.ndarray of shape (num_samples, x_dim) such that the L2 norms are 1
    along the x_dim.
  """
random_directions = np.random.normal(size=(num_samples, x_dim))
random_directions /= np.linalg.norm(random_directions, axis=1).reshape(-1, 1)
return random_directions
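# Added sanity-check sketch: every direction returned by sample_directions should
# have unit L2 norm.
assert np.allclose(np.linalg.norm(sample_directions(3, num_samples=5), axis=1), 1.0)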
def get_purturbed_directions(x0, step_size=1.0, num_samples=100):
  """Get perturbed parameters.

  Args:
    x0: A np.ndarray representing the central parameter to perturb.
    step_size: A float representing the size of the step to move in.
    num_samples: The integer number of samples to draw.

  Returns:
    Two np.ndarrays representing x0 perturbed by adding a random direction and
    minusing it. They are paired so that they move by the same direction at each
    index.
  """
directions = sample_directions(x0.shape[0], num_samples)
forward_step_points = x0.reshape(1, -1) + step_size * directions
backward_step_points = x0.reshape(1, -1) - step_size * directions
return forward_step_points, backward_step_points
def get_sampled_loss_function(
L_fn, x0, step_size=1.0, num_samples=100, x0_samples=1, return_points=False):
  """Sample the loss function around the perturbations.

  Args:
    L_fn: A callable function that takes a np.ndarray representing parameters
      and returns the loss.
    x0: A np.ndarray representing the central parameter to perturb.
    step_size: A float representing the size of the step to move in.
    num_samples: The integer number of samples to draw.
    x0_samples: The integer number of times to sample x0 (default is 1). Set > 1
      if the loss function is stochastic.
  """
forward_step_points, backward_step_points = get_purturbed_directions(
x0, step_size, num_samples)
if x0_samples == 1:
L_eval = L_fn(x0)
else:
L_eval = np.mean([L_fn(x0) for _ in range(x0_samples)])
L_forward_eval = np.apply_along_axis(L_fn, 1, forward_step_points) - L_eval
L_backward_eval = np.apply_along_axis(L_fn, 1, backward_step_points) - L_eval
if return_points:
return (
L_forward_eval,
L_backward_eval,
forward_step_points,
backward_step_points)
else:
return L_forward_eval, L_backward_eval
#######
## Define some simple loss functions for exposition.
#######
def L_quad(x):
  """Purely quadratic function."""
  return - x[0]**2 - 2.*x[1]**2

def L_flat_quad(x):
  """A quadratic function with one direction weighted with 0."""
  return -x[0]**2 - 0.*x[1]**2

def L_saddle(x):
  """A function with a saddle point."""
  return -2*x[0]**2 + 2*x[1]**2

def L_linear(x):
  """A linear function."""
  return -2*x[0] + 2*x[1]
plt.figure(figsize=(8, 5))
plt.subplot(121)
forward_samples, backward_samples = get_sampled_loss_function(
L_quad, np.array([0.0, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Strict local optimum')
plt.gca().set_xlim(-0.04, 0.04)
plt.gca().set_ylim(-0.04, 0.04)
plt.subplot(122)
forward_samples, backward_samples = get_sampled_loss_function(
L_flat_quad, np.array([0.0, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Flat local optimum')
plt.gca().set_xlim(-0.04, 0.04)
plt.gca().set_ylim(-0.04, 0.04)
plt.tight_layout()
# plt.savefig('LM_scatter_comparisons.pdf')
plt.figure(figsize=(10, 6))
plt.subplot(131)
forward_samples, backward_samples = get_sampled_loss_function(
L_saddle, np.array([0.0, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Saddle point')
plt.gca().set_xlim(-0.04, 0.04)
plt.gca().set_ylim(-0.04, 0.04)
plt.subplot(132)
forward_samples, backward_samples = get_sampled_loss_function(
L_quad, np.array([0.1, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Close to local optimum')
plt.gca().set_xlim(-0.04, 0.04)
plt.gca().set_ylim(-0.04, 0.04)
plt.subplot(133)
forward_samples, backward_samples = get_sampled_loss_function(
L_linear, np.array([0.1, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Linear region')
# plt.gca().set_xlim(-0.04, 0.04)
# plt.gca().set_ylim(-0.04, 0.04)
plt.tight_layout()
plt.savefig('linear_scatter_comparisons.pdf')
plt.figure(figsize=(5, 4))
# plt.subplot(121)
forward_samples, backward_samples = get_sampled_loss_function(L_goodfellow_saddle, np.array([-0.5, -2.]), step_size=0.1)
plt.scatter(-forward_samples, -backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.plot(np.linspace(-0.05, 0.01), np.linspace(-0.05, 0.01), linestyle='--', color='k')
plt.plot(np.linspace(-0.005, 0.005), -np.linspace(-0.005, 0.005), linestyle=':', color='k')
plt.xlabel(r'$\mathcal{L}(\theta_0+\alpha d)-\mathcal{L}(\theta_0)$')
plt.ylabel(r'$\mathcal{L}(\theta_0-\alpha d)-\mathcal{L}(\theta_0)$')
plt.gca().set_aspect('equal')
plt.tight_layout()
# plt.title(r'$x_0=(-0.5, -2)$: Minima')
# plt.savefig('demo_scatter2.pdf')
# files.download('linear_scatter_comparisons.pdf')
# files.download('LM_scatter_comparisons.pdf')
plt.figure(figsize=(10, 4))
plt.subplot(121)
forward_samples, backward_samples = get_sampled_loss_function(L_goodfellow_saddle, np.array([-0.5, -2.]), step_size=0.25)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'$x_0=(-0.5, -2)$: Minima')
plt.subplot(122)
forward_samples, backward_samples = get_sampled_loss_function(L_goodfellow_saddle, np.array([0., 0.]), step_size=0.25)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'$x_0=(0.0, 0.0)$: Sadde Point')
plt.tight_layout()
# These axes recover the gradient and curvature spectrum when projecting the 2D
# scatter plot evaluations.
CURVATURE_AX = np.array([1, 1]) # x = y
GRADIENT_AX = np.array([1, -1]) # x = -y
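# Why these two axes work (added explanatory note; g = gradient, H = Hessian at x0,
# to second order in the step size a):
#   forward  = L(x0 + a*d) - L(x0) ~  a*(g.d) + 0.5*a^2*(d'Hd)
#   backward = L(x0 - a*d) - L(x0) ~ -a*(g.d) + 0.5*a^2*(d'Hd)
# so projecting a (forward, backward) pair onto x = -y isolates the gradient term
# 2*a*(g.d), while projecting onto x = y isolates the curvature term a^2*(d'Hd),
# i.e. a finite-difference estimate of the second derivative along direction d.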
def scalar_project(x, v):
  """Calculate the scalar projection of vector x onto vector v."""
  v_hat = v / np.linalg.norm(v)
  return np.dot(v_hat, x)

def get_gradient_projection(values_centered):
  """Project 2D points onto the x=-y axis which gives gradient information."""
  return np.apply_along_axis(
      lambda x: scalar_project(x, GRADIENT_AX), 1, values_centered)

def get_curvature_projection(values_centered):
  """Project 2D points onto the x=y axis which gives curvature information."""
  return np.apply_along_axis(
      lambda x: scalar_project(x, CURVATURE_AX), 1, values_centered)
plt.figure(figsize=(5, 4))
plt.subplot(211)
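# NOTE: L_basic is not defined anywhere in this excerpt; it is presumably one of the
# simple loss functions defined in an earlier (omitted) cell of the original notebook.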
forward_samples, backward_samples = get_sampled_loss_function(
L_basic, np.array([-0.5, -2.]), step_size=0.1,num_samples=1000)
projections = get_gradient_projection(np.array([forward_samples, backward_samples]).T)
plt.hist(projections, bins=50)
plt.xlabel('Gradient value')
plt.ylabel('Count')
plt.subplot(212)
projections = get_curvature_projection(np.array([forward_samples, backward_samples]).T)
plt.hist(projections, bins=50)
plt.xlabel('Curvature value')
plt.ylabel('Count')
plt.tight_layout()
plt.savefig('demo_spectra_joined.pdf')
plt.figure(figsize=(5, 4))
plt.subplot(211)
forward_samples, backward_samples = get_sampled_loss_function(
L_goodfellow_saddle, np.array([-0.5, -2.]), step_size=0.1,num_samples=1000)
projections = get_gradient_projection(np.array([forward_samples, backward_samples]).T)
plt.hist(projections, bins=50)
plt.xlabel('Gradient value')
plt.ylabel('Count')
plt.subplot(212)
projections = get_curvature_projection(np.array([forward_samples, backward_samples]).T)
plt.hist(projections, bins=50)
plt.xlabel('Curvature value')
plt.ylabel('Count')
plt.tight_layout()
plt.savefig('demo_spectra_joined.pdf')
#@title Load Fashion MNIST.
(X_train, Y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
dataset_size = len(X_train)
output_size = 10
# Convert the array to float32 and normalize by 255
# Add a dim to represent the RGB index for CNNs
X_train = np.expand_dims((X_train.astype(np.float32) / 255.0), -1)
image_size = X_train.shape[1:]
Y_train = tf.keras.utils.to_categorical(Y_train, output_size).astype(np.float32)
#@title Create a simple Network.
with tf.device('gpu:0'):
model = tf.keras.Sequential(
[
tf.keras.layers.Flatten(input_shape=image_size),
tf.keras.layers.Dense(output_size, activation=tf.nn.softmax)
]
)
learning_rate = tf.Variable(0.1, trainable=False)
optimizer = tf.train.MomentumOptimizer(
learning_rate=learning_rate,
momentum=0.9)
get_decayed_learning_rate = tf.train.polynomial_decay(
learning_rate,
tf.train.get_or_create_global_step(),
5000000,
learning_rate.numpy() * 0.001)
# model.call = tfe.defun(model.call)
# We will now compile and print out a summary of our model
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.summary()
model_copy = tf.keras.models.clone_model(model)
model_copy.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
def L_fn_mnist(weights):
# Closure that allows us to get information about the loss function.
set_flat_params(model_copy, weights)
loss, _ = model_copy.evaluate(
full_dataset.make_one_shot_iterator(),
steps=1,
verbose=0)
return loss
model.get_weights()
# Save the initializer.
initializer_weights = get_flat_params(model.variables)
start = time.time()
forward_samples, backward_samples = get_sampled_loss_function(
L_fn_mnist,
initializer_weights,
step_size=1.0,
num_samples=200,
x0_samples=10)
# Free Doubling of points..
plt.scatter(
np.concatenate([forward_samples, backward_samples]),
np.concatenate([backward_samples, forward_samples]), s=2, marker='x')
print('total time: {}'.format(time.time() - start))
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'Fashion MNIST at initializer')
plt.tight_layout()
# Create some callbacks to allow us to print learning rate and other things.
tf.train.get_or_create_global_step().assign(0)
BATCH_SIZE=60000
FULL_DATASET_SIZE = X_train.shape[0]
# Create the dataset
original_dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train))
dataset = original_dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(1)
dataset = dataset.repeat()
full_dataset = original_dataset.batch(FULL_DATASET_SIZE)
def lr_decay_callback(*args):
learning_rate.assign(get_decayed_learning_rate())
tf.train.get_or_create_global_step().assign_add(1)
def lr_print_callback(epoch, logs):
step = tf.train.get_or_create_global_step().numpy()
if step % 50 == 0 or step == 0:
print(
'Step {}, Learning rate: {}, Metrics: {}'.format(
step, learning_rate.numpy(),logs))
learning_rate_decay_callback = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lr_decay_callback)
learning_rate_print_callback = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lr_print_callback)
callbacks = [learning_rate_decay_callback, learning_rate_print_callback]
# Train!
EPOCHS=15000
history = model.fit(
dataset, epochs=EPOCHS, steps_per_epoch=FULL_DATASET_SIZE // BATCH_SIZE,
callbacks=callbacks, verbose=0)
np.save('weights.npy', model.get_weights())
from google.colab import files
files.download('weights.npy') # NTS: save to google drive for week Oct 22
final_weights = get_flat_params(model.variables)
np.save('final_weights_flat.npy', final_weights)
files.download('final_weights_flat.npy') # NTS: save to google drive for week Oct 22
start = time.time()
forward_samples, backward_samples = get_sampled_loss_function(
L_fn_mnist, final_weights, step_size=1.0, num_samples=200, x0_samples=10)
# Free Doubling of points..
plt.scatter(
np.concatenate([forward_samples, backward_samples]),
np.concatenate([backward_samples, forward_samples]), s=2, marker='x')
print('total time: {}'.format(time.time() - start))
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'Fashion MNIST at final point')
plt.tight_layout()
start = time.time()
forward_samples, backward_samples = get_sampled_loss_function(
L_fn_mnist, final_weights, step_size=0.25, num_samples=200, x0_samples=10)
# Free Doubling of points..
plt.scatter(
np.concatenate([forward_samples, backward_samples]),
np.concatenate([backward_samples, forward_samples]), s=2, marker='x')
print('total time: {}'.format(time.time() - start))
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'Fashion MNIST at final point, smaller alpha')
plt.tight_layout()
def index_of_percentile(data, percentile, data_index=0):
  """Gets the index of the percentile in the data.

  Args:
    data: A np.ndarray of shape (BATCH, ...)
    percentile: The percentile of the data you want.
    data_index: An integer representing the index of `data` that you want to
      slice.

  Returns:
    The index closest to the percentile of the data. When accessing data[index]
    we retrieve the data at the `percentile`-th percentile.
  """
percentile_value = np.percentile(
data[:, data_index],
percentile,
interpolation='nearest'
)
data_shifted = np.abs(data[:, data_index] - percentile_value)
return np.argmin(data_shifted)
def get_curvature_of_most_improvement_direction(
data_centered, percentile=90, data_index=0, curvature_data=None):
  """Get the curvature value for the direction which gives the most improvement.

  Args:
    data_centered: A np.ndarray containing the centered version of the data.
    percentile: An integer value of the percentile of the data.
    data_index: See `index_of_percentile`.
    curvature_data: Precomputed curvature data.

  Returns:
    The curvature value that corresponds to the relative change at
    percentile `percentile`.
  """
# Get the index of the data point at the percentile'd data.
closest_idx = index_of_percentile(data_centered, percentile, data_index)
if curvature_data is None:
# No precomputed curvature data.
# So project the centered data and get the curvature.
return scalar_project(
data_centered[closest_idx, :], CURVATURE_AX)
else:
# Curvature data was precomputed so just returned the curvature value
# corresponding to the index that is closest to the `percentile`th
# percentile.
return curvature_data[closest_idx]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualizing objective functions by interpolating in randomly drawn directions
Step3: Background
Step7: This function has a saddle point at $(0,0)$ and a range of local optima along $x_1 = \frac{1}{x_2}$ as shown in the following figure.
Step8: A first step
Step9: A linear interpolation would be correct in pointing us to the fact that there is a local optimum, but would mislead us into thinking that there was no path from the left optimum to the right optimum. In fact, if we plot the loss function, we see that this slice actually goes through a region of high loss before making it into another region with a local optimum.
Step13: Proposed Approach
Step18: We now show the scatter plots from our proposed technique.
Step19: We now visualize the point given from the first section.
Step20: Our technique correctly recovers that this point is a flat local optimum.
Step24: These help us distinguish between optima which have flat regions around them as well as saddle points. Note that the flatness is indicated by the points lying around 0. We contrast this with a quadratic function with a single local minimum, which has no points around zero.
Step25: Visualizing Objective Functions for Fashion MNIST
Step27: Around the initializer, the loss function is linear.
Step28: footnote
Step31: Tracking negative curvature
|
13,383
|
<ASSISTANT_TASK:>
Python Code:
send(IP(dst="1.2.3.4")/TCP(dport=502, options=[("MSS", 0)]))
ans = sr([IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_RR())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_Traceroute())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8))/ICMP(seq=RandShort())], verbose=False, timeout=3)[0]
ans.make_table(lambda x, y: (", ".join(z.summary() for z in x[IP].options) or '-', x[IP].ttl, y.sprintf("%IP.src% %ICMP.type%")))
from scapy.all import *
packet = IP()/TCP()
Ether()/packet
>>> ls(IP, verbose=True)
version : BitField (4 bits) = (4)
ihl : BitField (4 bits) = (None)
tos : XByteField = (0)
len : ShortField = (None)
id : ShortField = (1)
flags : FlagsField (3 bits) = (0)
MF, DF, evil
frag : BitField (13 bits) = (0)
ttl : ByteField = (64)
proto : ByteEnumField = (0)
chksum : XShortField = (None)
src : SourceIPField (Emph) = (None)
dst : DestIPField (Emph) = (None)
options : PacketListField = ([])
p = Ether()/IP(dst="www.secdev.org")/TCP()
p.summary()
print(p.dst) # first layer that has an src field, here Ether
print(p[IP].src) # explicitly access the src field of the IP layer
# sprintf() is a useful method to display fields
print(p.sprintf("%Ether.src% > %Ether.dst%\n%IP.src% > %IP.dst%"))
print(p.sprintf("%TCP.flags% %TCP.dport%"))
[p for p in IP(ttl=(1,5))/ICMP()]
sr1(IP(dst="8.8.8.8")/UDP()/DNS(qd=DNSQR()))
p[DNS].an
r, u = srp(Ether()/IP(dst="8.8.8.8", ttl=(5,10))/UDP()/DNS(rd=1, qd=DNSQR(qname="www.example.com")))
r, u
# Access the first tuple
print(r[0][0].summary()) # the packet sent
print(r[0][1].summary()) # the answer received
# Access the ICMP layer. Scapy received a time-exceeded error message
r[0][1][ICMP]
wrpcap("scapy.pcap", r)
pcap_p = rdpcap("scapy.pcap")
pcap_p[0]
s = sniff(count=2)
s
sniff(count=2, prn=lambda p: p.summary())
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # create an UDP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/UDP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNS
# Send the DNS query
ssck.sr1(DNS(rd=1, qd=DNSQR(qname="www.example.com")))
ans, unans = srloop(IP(dst=["8.8.8.8", "8.8.4.4"])/ICMP(), inter=.1, timeout=.1, count=100, verbose=False)
%matplotlib inline
ans.multiplot(lambda x, y: (y[IP].src, (y.time, y[IP].id)), plot_xy=True)
pkt = IP() / UDP() / DNS(qd=DNSQR())
print(repr(raw(pkt)))
print(pkt.summary())
hexdump(pkt)
pkt.show()
pkt.canvas_dump()
ans, unans = traceroute('www.secdev.org', maxttl=15)
ans.world_trace()
ans = sr(IP(dst=["scanme.nmap.org", "nmap.org"])/TCP(dport=[22, 80, 443, 31337]), timeout=3, verbose=False)[0]
ans.extend(sr(IP(dst=["scanme.nmap.org", "nmap.org"])/UDP(dport=53)/DNS(qd=DNSQR()), timeout=3, verbose=False)[0])
ans.make_table(lambda x, y: (x[IP].dst, x.sprintf('%IP.proto%/{TCP:%r,TCP.dport%}{UDP:%r,UDP.dport%}'), y.sprintf('{TCP:%TCP.flags%}{ICMP:%ICMP.type%}')))
class DNSTCP(Packet):
name = "DNS over TCP"
fields_desc = [ FieldLenField("len", None, fmt="!H", length_of="dns"),
PacketLenField("dns", 0, DNS, length_from=lambda p: p.len)]
# This method tells Scapy that the next packet must be decoded with DNSTCP
def guess_payload_class(self, payload):
return DNSTCP
# Build then decode a DNS message over TCP
DNSTCP(raw(DNSTCP(dns=DNS())))
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # create a TCP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/TCP
# Create the StreamSocket and gives the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNSTCP
# Send the DNS query
ssck.sr1(DNSTCP(dns=DNS(rd=1, qd=DNSQR(qname="www.example.com"))))
from scapy.all import *
import argparse
parser = argparse.ArgumentParser(description="A simple ping6")
parser.add_argument("ipv6_address", help="An IPv6 address")
args = parser.parse_args()
print(sr1(IPv6(dst=args.ipv6_address)/ICMPv6EchoRequest(), verbose=0).summary())
# Specify the Wi-Fi monitor interface
#conf.iface = "mon0" # uncomment to test
# Create an answering machine
class ProbeRequest_am(AnsweringMachine):
function_name = "pram"
# The fake mac of the fake access point
mac = "00:11:22:33:44:55"
def is_request(self, pkt):
return Dot11ProbeReq in pkt
def make_reply(self, req):
rep = RadioTap()
# Note: depending on your Wi-Fi card, you might need a different header than RadioTap()
rep /= Dot11(addr1=req.addr2, addr2=self.mac, addr3=self.mac, ID=RandShort(), SC=RandShort())
rep /= Dot11ProbeResp(cap="ESS", timestamp=time.time())
rep /= Dot11Elt(ID="SSID",info="Scapy !")
rep /= Dot11Elt(ID="Rates",info=b'\x82\x84\x0b\x16\x96')
rep /= Dot11Elt(ID="DSset",info=chr(10))
        return rep
# Start the answering machine
#ProbeRequest_am()() # uncomment to test
from scapy.all import *
import nfqueue, socket
def scapy_cb(i, payload):
s = payload.get_data() # get and parse the packet
p = IP(s)
# Check if the packet is an ICMP Echo Request to 8.8.8.8
if p.dst == "8.8.8.8" and ICMP in p:
# Delete checksums to force Scapy to compute them
del(p[IP].chksum, p[ICMP].chksum)
# Set the ICMP sequence number to 0
p[ICMP].seq = 0
# Let the modified packet go through
ret = payload.set_verdict_modified(nfqueue.NF_ACCEPT, raw(p), len(p))
else:
# Accept all packets
payload.set_verdict(nfqueue.NF_ACCEPT)
# Get an NFQUEUE handler
q = nfqueue.queue()
# Set the function that will be called on each received packet
q.set_callback(scapy_cb)
# Open the queue & start parsing packets
q.fast_open(2807, socket.AF_INET)
q.try_run()
class TCPScanner(Automaton):
@ATMT.state(initial=1)
def BEGIN(self):
pass
@ATMT.state()
def SYN(self):
print("-> SYN")
@ATMT.state()
def SYN_ACK(self):
print("<- SYN/ACK")
raise self.END()
@ATMT.state()
def RST(self):
print("<- RST")
raise self.END()
@ATMT.state()
def ERROR(self):
print("!! ERROR")
raise self.END()
@ATMT.state(final=1)
def END(self):
pass
@ATMT.condition(BEGIN)
def condition_BEGIN(self):
raise self.SYN()
@ATMT.condition(SYN)
def condition_SYN(self):
if random.randint(0, 1):
raise self.SYN_ACK()
else:
raise self.RST()
@ATMT.timeout(SYN, 1)
def timeout_SYN(self):
raise self.ERROR()
TCPScanner().run()
TCPScanner().run()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Advanced firewalking using IP options is sometimes useful to perform network enumeration. Here is a more complicated one-liner
Step2: Now that we've got your attention, let's start the tutorial!
Step3: First steps
Step4: This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.
Step5: Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.
Step6: There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !
Step7: Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.
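As a quick added illustration of those defaults (this snippet is not part of the original notebook; the exact printed form of the flags depends on the Scapy version):
t = TCP()
print(t.sport)   # 20
print(t.dport)   # 80
print(t.flags)   # S, i.e. a SYN segment, in recent Scapy versions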
Step8: Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.
Step9: Sending and receiving
Step10: Another alternative is the sr() function. Like srp1(), the sr1() function can be used for layer 2 packets.
Step11: sr() sends a list of packets and returns two variables, here r and u, where
Step12: With Scapy, list of packets, such as r or u, can be easily written to, or read from PCAP files.
Step13: Sniffing the network is as straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets that can be manipulated as previously described.
Step14: sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.
Step15: Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns a UDP socket to a Scapy StreamSocket, which is then used to query the IPv4 address of www.example.com.
Step16: Visualization
Step17: Then we can use the results to plot the IP id values.
Step18: The raw() constructor can be used to "build" the packet's bytes as they would be sent on the wire.
Step19: Since some people cannot read this representation, Scapy can
Step20: "hexdump" the packet's bytes
Step21: dump the packet, layer by layer, with the values for each field
Step22: render a pretty and handy dissection of the packet
Step23: Scapy has a traceroute() function, which basically runs sr(IP(ttl=(1..30))) and creates a TracerouteResult object, which is a specific subclass of SndRcvList().
Step24: The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)
Step25: The PacketList.make_table() function can be very helpful. Here is a simple "port scanner"
Step26: Implementing a new protocol
Step27: This new packet definition can be directly used to build a DNS message over TCP.
Step28: Modifying the previous StreamSocket example to use TCP allows us to use the new DNSTCP layer easily.
Step29: Scapy as a module
Step30: Answering machines
Step31: Cheap Man-in-the-middle with NFQUEUE
Step32: Automaton
|
13,384
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from the notebook's setup (pandas/numpy are used throughout but never imported in this record)
import pandas as pd
import numpy as np
data = pd.read_csv("./formatted_data.csv",header=0, index_col=False)
data.head()
drop_cols = ['Sensor_'+x+'1' for x in map(chr,range(65,81))]
drop_cols.append('Batch_No')
data = data.drop(drop_cols, axis=1)
data.head()
data.describe()
from sklearn import preprocessing
target = data['Label']
data = data.drop('Label', axis=1)
min_max_scaler = preprocessing.MinMaxScaler(feature_range=(-1,1))
data_scaled = min_max_scaler.fit_transform(data)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data_scaled, target, test_size=0.25, random_state=0)
from sklearn import tree
dt_classifier = tree.DecisionTreeClassifier()
dt_classifier = dt_classifier.fit(X_train, y_train)
y_pred = dt_classifier.predict(X_test)
print "Accuracy: %0.2f" %dt_classifier.score(X_test, y_test)
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import itertools
%matplotlib inline
def plot_confusion_matrix(cm, classes, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j], horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
return
#plt.figure()
plot_confusion_matrix(confusion_matrix(y_test, y_pred), classes=["Ethanol", "Ethylene", "Ammonia", "Acetaldehyde", "Acetone", "Toluene"],
title='Confusion matrix')
from sklearn.metrics import classification_report
print classification_report(y_test, y_pred, target_names=["Ethanol", "Ethylene", "Ammonia", "Acetaldehyde", "Acetone", "Toluene"])
from sklearn.model_selection import cross_val_score
def cv_score(clf,k):
f1_scores = cross_val_score(clf, data_scaled, target, cv=k, scoring='f1_macro')
print f1_scores
print("F1 score: %0.2f (+/- %0.2f)" % (f1_scores.mean(), f1_scores.std() * 2))
return
cv_score(dt_classifier,10)
from sklearn.ensemble import RandomForestClassifier
rf_classifier = RandomForestClassifier(n_estimators=5)
#rf_classifier = rf_classifier.fit(X_train, y_train)
#y_pred_rf = classifier.predict(X_test)
cv_score(rf_classifier,10)
#plot_confusion_matrix(confusion_matrix(y_test, y_pred_rf), classes=["Ethanol", "Ethylene", "Ammonia", "Acetaldehyde", "Acetone", "Toluene"],
# title='Confusion matrix')
#print classification_report(y_test, y_pred_rf, target_names=["Ethanol", "Ethylene", "Ammonia", "Acetaldehyde", "Acetone", "Toluene"])
from sklearn import svm
svm_classifier = svm.SVC(C=1.0, kernel='rbf', gamma='auto', cache_size=9000, decision_function_shape = 'ovr')
cv_score(svm_classifier,10)
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedShuffleSplit
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42)
grid = GridSearchCV(svm.SVC(), param_grid=param_grid, cv=cv)
grid.fit(data, target)
print("The best parameters are %s with a score of %0.2f"
% (grid.best_params_, grid.best_score_))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For each sensor the second column is the normalized form of the first column, so to avoid duplicates we drop the first column (A1,B1...P1) for each sensor.
Step2: Summarize the data to better understand its distribution and decide on the appropriate preprocessing steps
Step3: Data preprocessing
Step4: Split the dataset into a training set and test set.
Step5: Binomial v/s Multinomial classification
Step6: In the above code snippet we fit a decision tree to our dataset, used it to make predictions on our test set, and calculated its accuracy as the share of correct predictions among all predictions made. Accuracy is a starting point but is not a sufficient measure of a model's predictive power, due to a phenomenon known as the Accuracy Paradox: it yields misleading results if the data set is unbalanced.
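A minimal added sketch of that paradox (not in the original notebook), using sklearn's DummyClassifier on the same train/test split: a majority-class baseline can post a respectable accuracy while learning nothing, which is why the per-class report below matters.
from sklearn.dummy import DummyClassifier
baseline = DummyClassifier(strategy='most_frequent')   # always predicts the most common gas
baseline.fit(X_train, y_train)
print("Baseline accuracy: %0.2f" % baseline.score(X_test, y_test))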
Step7: We use one of the methods to compute Precision, Recall and F-1 score for each class.
Step8: Cross-Validation
Step9: By default, the score computed at each CV iteration is the score method of the estimator.
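For comparison, a short hypothetical check (added here, not in the original notebook): omitting the scoring argument falls back to the estimator's own .score() (accuracy for classifiers), whereas the cv_score helper above asks for F1 explicitly.
from sklearn.model_selection import cross_val_score
print(cross_val_score(dt_classifier, data_scaled, target, cv=10))                      # default: estimator.score (accuracy)
print(cross_val_score(dt_classifier, data_scaled, target, cv=10, scoring='f1_macro'))  # explicit F1, as in cv_score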
Step10: Ensemble learning
Step11: Uncomment the following snippets of code to view the confusion matrix and classification report for the Random forest model.
Step12: Bias - Variance Trade off
Step13: Hyperparameter optimization
|
13,385
|
<ASSISTANT_TASK:>
Python Code:
import espressomd
import espressomd.electrostatics
import espressomd.observables
import espressomd.accumulators
import espressomd.math
espressomd.assert_features(['WCA', 'ELECTROSTATICS'])
import numpy as np
import scipy.optimize
%matplotlib inline
import matplotlib.pyplot as plt
np.random.seed(41)
plt.rcParams.update({'font.size': 22})
# system parameters
ROD_LENGTH = 50
BJERRUM_LENGTH = 1.0
# we assume a unit system where the elementary charge and the thermal energy are both 1
system = espressomd.System(box_l=3 * [ROD_LENGTH])
KT = 1.
Q_E = 1.
system.time_step = 0.01
system.cell_system.skin = 0.4
# interaction parameters
WCA_EPSILON = 1.0
ION_DIAMETER = 1.0
ROD_RADIUS = 1.0
# particle types
ROD_TYPE = 1
COUNTERION_TYPE = 2
COUNTERION_VALENCY = 1
ROD_CHARGE_DENS = 2
# number of beads that make up the rod
N_rod_beads = int(ROD_LENGTH / ROD_RADIUS)
setup_rod_and_counterions(system, COUNTERION_VALENCY, COUNTERION_TYPE,
ROD_CHARGE_DENS, N_rod_beads, ROD_TYPE)
# check that the particle setup was done correctly
assert abs(sum(system.part.all().q)) < 1e-10
assert np.all(system.part.select(type=ROD_TYPE).fix)
p3m_params = {'prefactor': KT * BJERRUM_LENGTH * Q_E**2,
'accuracy': 1e-3}
def remove_overlap(system, sd_params):
# Removes overlap by steepest descent until forces or energies converge
# Set up steepest descent integration
system.integrator.set_steepest_descent(f_max=0,
gamma=sd_params['damping'],
max_displacement=sd_params['max_displacement'])
# Initialize integrator to obtain initial forces
system.integrator.run(0)
maxforce = np.max(np.linalg.norm(system.part.all().f, axis=1))
energy = system.analysis.energy()['total']
i = 0
while i < sd_params['max_steps'] // sd_params['emstep']:
prev_maxforce = maxforce
prev_energy = energy
system.integrator.run(sd_params['emstep'])
maxforce = np.max(np.linalg.norm(system.part.all().f, axis=1))
relforce = np.abs((maxforce - prev_maxforce) / prev_maxforce)
energy = system.analysis.energy()['total']
relener = np.abs((energy - prev_energy) / prev_energy)
if i > 1 and (i + 1) % 4 == 0:
print(f"minimization step: {(i+1)*sd_params['emstep']:4.0f}"
f" max. rel. force change:{relforce:+3.3e}"
f" rel. energy change:{relener:+3.3e}")
if relforce < sd_params['f_tol'] or relener < sd_params['e_tol']:
break
i += 1
system.integrator.set_vv()
STEEPEST_DESCENT_PARAMS = {'f_tol': 1e-2,
'e_tol': 1e-5,
'damping': 30,
'max_steps': 10000,
'max_displacement': 0.01,
'emstep': 10}
remove_overlap(system, STEEPEST_DESCENT_PARAMS)
LANGEVIN_PARAMS = {'kT': KT,
'gamma': 0.5,
'seed': 42}
system.thermostat.set_langevin(**LANGEVIN_PARAMS)
energies = []
STEPS_PER_SAMPLE_FIRST_RUN = 10
N_SAMPLES_FIRST_RUN = 1000
for i in range(N_SAMPLES_FIRST_RUN):
system.integrator.run(STEPS_PER_SAMPLE_FIRST_RUN)
energies.append(system.analysis.energy()['total'])
# plot time in time_steps so we can judge the number of warmup steps
ts = np.arange(0, N_SAMPLES_FIRST_RUN) * STEPS_PER_SAMPLE_FIRST_RUN
plt.figure(figsize=(10, 7))
plt.plot(ts, energies)
plt.xlabel('time steps')
plt.ylabel('system total energy')
plt.show()
WARMUP_STEPS = 5000
STEPS_PER_SAMPLE = 100
r_min = ROD_RADIUS + ION_DIAMETER / 2.
r_max = system.box_l[0] / 2.
N_RADIAL_BINS = 200
radial_profile_accs, bin_edges = setup_profile_calculation(
system, STEPS_PER_SAMPLE, [COUNTERION_TYPE], r_min, N_RADIAL_BINS)
assert isinstance(
radial_profile_accs[COUNTERION_TYPE], espressomd.accumulators.MeanVarianceCalculator)
assert len(bin_edges) == N_RADIAL_BINS + 1
clear_system(system)
runs = [{'params': {'counterion_valency': 2, 'rod_charge_dens': 1},
'histogram': None},
{'params': {'counterion_valency': 1, 'rod_charge_dens': 2},
'histogram': None}
]
N_SAMPLES = 1500
def integrate_system(system, n_steps):
for i in range(100):
system.integrator.run(int(n_steps / 100))
print(f'progress={i}%', end='\r')
print('progress=100%')
# With the notion of P(r) the probability to find the charge up to r,
# we only use the right side of the bin edges for plotting
rs = bin_edges[1:, 0, 0, 0]
fig, ax = plt.subplots(figsize=(10, 7))
for run in runs:
hist = np.array(run['histogram'][:, 0, 0])
# The CylindricalDensityProfile normalizes the bin values by the bin size.
# We want the 'raw' distribution (number of ions within a radius)
# so we need to multiply by the radii
hist = hist * rs
cum_hist = np.cumsum(hist)
cum_hist /= cum_hist[-1]
manning_xi = run['params']['rod_charge_dens'] * BJERRUM_LENGTH / Q_E
ax.plot(rs, cum_hist, label=rf'$\xi ={manning_xi}, \nu = {run["params"]["counterion_valency"]}$')
ax.set_xscale('log')
ax.legend()
plt.xlabel('r')
plt.ylabel('P(r)')
plt.show()
def eq_to_solve_for_gamma(gamma, manning_parameter, rod_radius, max_radius):
# eq 7 - eq 6 from 10.1021/ma990897o
return gamma * np.log(max_radius / rod_radius) - np.arctan(1 / gamma) + np.arctan((1 - manning_parameter) / gamma)
def calc_manning_radius(gamma, max_radius):
# eq 7 from 10.1021/ma990897o
return max_radius * np.exp(-np.arctan(1. / gamma) / gamma)
def calc_PB_probability(r, manning_parameter, gamma, manning_radius):
# eq 8 and 9 from 10.1021/ma990897o
return 1. / manning_parameter + gamma / manning_parameter * np.tan(gamma * np.log(r / manning_radius))
rod_charge_density = runs[0]['params']['rod_charge_dens']
ion_valency = runs[0]['params']['counterion_valency']
manning_parameter_times_valency = BJERRUM_LENGTH * rod_charge_density * ion_valency
gamma = scipy.optimize.fsolve(eq_to_solve_for_gamma, 1, args=(
manning_parameter_times_valency, r_min, r_max))
manning_radius = calc_manning_radius(gamma, r_max)
PB_probability = calc_PB_probability(
rs, manning_parameter_times_valency, gamma, manning_radius)
ax.plot(rs, PB_probability, label=rf'PB $\xi \cdot \nu$ = {manning_parameter_times_valency}')
ax.legend()
ax.set_xscale('log')
fig
def add_salt(system, anion_params, cation_params):
N_anions = anion_params['number']
N_cations = cation_params['number']
anion_positions = np.random.random((N_anions, 3)) * system.box_l
cation_positions = np.random.random((N_cations, 3)) * system.box_l
anions = system.part.add(pos=anion_positions, type=[anion_params['type']] * N_anions,
q=[-anion_params['valency']] * N_anions)
cations = system.part.add(pos=cation_positions, type=[cation_params['type']] * N_cations,
q=[cation_params['valency']] * N_cations)
return anions, cations
ANION_PARAMS = {'type': 3,
'valency': 2,
'number': 150}
CATION_PARAMS = {'type': 4,
'valency': 2,
'number': 150}
ROD_LENGTH = 10
N_rod_beads = int(ROD_LENGTH / ROD_RADIUS)
ROD_CHARGE_DENS = 1
COUNTERION_VALENCY = 1
STEPS_PER_SAMPLE_SALT = 20
N_SAMPLES_SALT = 1500
N_RADIAL_BINS = 100
all_ion_types = [COUNTERION_TYPE, ANION_PARAMS['type'], CATION_PARAMS['type']]
# set interactions of salt with the rod and all ions
for salt_type in [ANION_PARAMS['type'], CATION_PARAMS['type']]:
system.non_bonded_inter[salt_type, ROD_TYPE].wca.set_params(
epsilon=WCA_EPSILON, sigma=ION_DIAMETER / 2. + ROD_RADIUS)
for ion_type in all_ion_types:
system.non_bonded_inter[salt_type, ion_type].wca.set_params(
epsilon=WCA_EPSILON, sigma=ION_DIAMETER)
clear_system(system)
system.box_l = 3 * [ROD_LENGTH]
counterions = setup_rod_and_counterions(
system, COUNTERION_VALENCY, COUNTERION_TYPE,
ROD_CHARGE_DENS, N_rod_beads, ROD_TYPE)
anions, cations = add_salt(system, ANION_PARAMS, CATION_PARAMS)
assert abs(sum(anions.q) + sum(cations.q)) < 1e-10
p3m.tune()
remove_overlap(system, STEEPEST_DESCENT_PARAMS)
system.thermostat.set_langevin(**LANGEVIN_PARAMS)
print('starting warmup')
integrate_system(system, WARMUP_STEPS)
radial_profile_accs, bin_edges = setup_profile_calculation(
system, STEPS_PER_SAMPLE_SALT, all_ion_types, r_min, N_RADIAL_BINS)
print('starting simulation')
integrate_system(system, N_SAMPLES_SALT * STEPS_PER_SAMPLE_SALT)
print('simulation done')
rs = bin_edges[1:, 0, 0, 0]
cum_hists = {}
for ion_type in all_ion_types:
hist = radial_profile_accs[ion_type].mean()
hist = hist[:, 0, 0] * rs
cum_hist = np.cumsum(hist)
cum_hist /= cum_hist[-1]
cum_hists[ion_type] = cum_hist
# The original leaves this assembly as an exercise; a sketch: combine the per-type
# cumulative histograms weighted by valency (anions contribute with a negative sign).
charge_hist = (COUNTERION_VALENCY * cum_hists[COUNTERION_TYPE]
               + CATION_PARAMS['valency'] * cum_hists[CATION_PARAMS['type']]
               - ANION_PARAMS['valency'] * cum_hists[ANION_PARAMS['type']])
charge_hist /= charge_hist[-1]
fig2, ax2 = plt.subplots(figsize=(10, 7))
ax2.plot(rs, charge_hist)
ax2.set_xscale('linear')
plt.xlabel('r')
plt.ylabel('P(r)')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: System setup
Step2: We will build the charged rod from individual particles that are fixed in space. With this, we can use the particle-based electrostatics methods of ESPResSo. For analysis, we give the rod particles a different type than the counterions.
Step3: Exercise
Step4: Now we set up the electrostatics method to calculate the forces and energies from the longrange coulomb interaction. ESPResSo uses so-called <tt>actors</tt> for electrostatics, magnetostatics and hydrodynamics. This ensures that unphysical combinations of algorithms are avoided, for example simultaneous usage of two electrostatic interactions. Adding an actor to the system also activates the method and calls necessary initialization routines. Here, we define a P$^3$M object using the Bjerrum length and rms force error. This automatically starts a tuning function which tries to find optimal parameters for P$^3$M and prints them to the screen. For more details, see the Espresso documentation.
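The actor creation itself is left to the reader in this record; a minimal sketch, assuming the usual ESPResSo interface (the exact attachment call depends on the ESPResSo version), could look like:
p3m = espressomd.electrostatics.P3M(**p3m_params)
system.actors.add(p3m)   # newer releases instead assign: system.electrostatics.solver = p3m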
Step5: For the accuracy, ESPResSo estimates the relative error in the force calculation introduced by the approximations of $P^3M$. We choose a relatively poor accuracy (large value) for this tutorial to make it run faster. For your own production simulations you should reduce that number.
Step6: After the overlap is removed, we activate a thermostat to simulate the system at a given temperature.
Step7: First run and observable setup
Step8: Now we are ready to implement the observable calculation. As we are interested in the condensation of counterions on the rod, the physical quantity of interest is the density of charges $\rho(r)$ around the rod, where $r$ is the distance from the rod. We need many samples to calculate the density from histograms.
Step9: To run the simulation with different parameters, we need a way to reset the system and return it to an empty state before setting it up again.
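The clear_system() helper is not defined in this record; a minimal sketch under the assumption of the standard ESPResSo 4.x calls (version-dependent, shown only for orientation):
def clear_system(system):
    system.time = 0.
    system.part.clear()                       # remove all particles
    system.thermostat.turn_off()
    system.auto_update_accumulators.clear()   # drop registered observable accumulators
    system.actors.clear()                     # or reset the electrostatics solver, depending on the version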
Step10: Production run and analysis
Step11: For longer simulation runs it will be convenient to have a progress bar
Step12: Exercise
Step13: In the semilogarithmic plot we see an inflection point of the cumulative charge distribution which is the indicator for ion condensation. To compare to the meanfield approach of PB, we calculate the solution of the analytical expressions given in 10.1021/ma990897o
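For reference (an added note, transcribing what the functions above implement rather than anything new): eq_to_solve_for_gamma solves $\gamma\,\ln(R/r_0) - \arctan(1/\gamma) + \arctan\big((1-\xi)/\gamma\big) = 0$, calc_manning_radius evaluates $R_M = R\,\exp\big(-\arctan(1/\gamma)/\gamma\big)$, and calc_PB_probability returns $P(r) = \frac{1}{\xi} + \frac{\gamma}{\xi}\,\tan\big(\gamma\,\ln(r/R_M)\big)$, where $r_0$ corresponds to r_min, $R$ to r_max and $\xi$ to the Manning parameter (multiplied by the valency for multivalent ions).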
Step14: For multivalent counterions, the manning parameter $\xi$ has to be multiplied by the valency $\nu$. The result depends only on the product of rod_charge_dens and ion_valency, so we only need one curve
Step15: We see that overall agreement is quite good, but the deviations from the PB solution get stronger the more charged the ions are.
Step16: Exercise
|
13,386
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize']=(20,5)
# We don't use any column as the index for the data
Puertos=pd.read_csv('/Datos/Informacion_Estadistica_Mensual_de_las_Marinas_FOP_180216.csv')
# We find that the table has 288 rows and 4 columns.
Puertos.shape
# Display the first 5 rows
Puertos.head()
# Not strictly necessary, but display the last records just to explore them
Puertos.tail()
# A brief description of the variable types
Puertos.info()
# Select the 'Marina Fonatur' column and drop duplicates to see how many ports are recorded in the data.
Puertos['Marina Fonatur'].drop_duplicates()
# Group by port, compute the median and sort by the dock-and-buoy occupancy percentage. The first rows
# of the table are the ones with the highest values.
Puertos.groupby('Marina Fonatur').median().sort_values(by='Porcentaje de ocupacion muelles y boyas',ascending=False)
# Pivot table to see the behavior of all ports
Puertos.pivot_table(values='Porcentaje de ocupacion muelles y boyas',columns='Marina Fonatur',index='Fecha')
# Plot the pivot table
Puertos.pivot_table(values='Porcentaje de ocupacion muelles y boyas',columns='Marina Fonatur',index='Fecha').plot()
# Build the table and then keep only the 3 ports to review.
Tabla1=Puertos.pivot_table(values='Porcentaje de ocupacion muelles y boyas',columns='Marina Fonatur',index='Fecha')
Tabla1[['Mazatlan','La Paz','Guaymas']]
# The purple line is missing a value at its last point; this is caused by the NaN
Tabla1=Tabla1[['Mazatlan','La Paz','Guaymas']]
Tabla1.plot()
# Replace the NaN value; plotting again now shows the purple line with a data point at the last date.
Tabla1=Tabla1.fillna(Tabla1.mean())
Tabla1.plot()
# To explore the behavior of Mazatlan, both dock-and-buoy and dry-marina occupancy, select from
# the DataFrame only the data related to that port, and build its plot.
Mazatlan=Puertos[Puertos['Marina Fonatur'].ix[:]=='Mazatlan']
Mazatlan.plot(x='Fecha')
# The correlation between the two percentages is 0.312287, quite low.
Mazatlan.corr()
# The above can be visualized with a figure combining the histograms and the scatterplot.
sns.jointplot(x="Porcentaje de ocupacion muelles y boyas",y="Porcentaje de ocupacion marina seca",data=Mazatlan,kind="reg")
# Correlation matrix for Mazatlan, La Paz and Guaymas.
Tabla1.corr()
# Finally, visualize the relationship between the median percentages for each port.
Puertos.groupby('Marina Fonatur').median().plot(kind='bar')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check the number of rows and columns, and display the first and last records.
Step2: The first thing I try to explore is how many ports are included, in order to pick the 3 with the highest median occupancy percentage.
Step3: We see there are only 8 ports; now I group by port and take the median of the occupancy percentages to pick the ones with the highest values and analyze them afterwards.
Step4: The ports to analyze or explore are Mazatlan, La Paz and Guaymas. To see how the percentage behaves over time for all ports, a pivot table is built.
Step5: Note that the last row has a NaN for La Paz, so I will take that into account in the corresponding review. The table above can be plotted directly by simply appending plot() to the construction of the table.
Step6: The plot shows that, since February 2015, Mazatlan (blue line) has the highest dock-and-buoy occupancy, always above 60%. La Paz (purple line) shows a downward "trend" since the same date. Guaymas (red line) shows an overall, almost steady growth with slight dips and rises, except for the last months of 2015 where the growth stalled.
Step7: The plot shows that after February 2015 there is a rise in both dry-marina and dock-and-buoy occupancy. It also shows a very considerable drop in overall occupancy between December 2014 and January 2015.
Step8: There is little relationship between the two occupancy rates. I do not explore this further, although one could check whether it holds for each of the analyzed ports. The relationship between the dock-and-buoy occupancy percentages of the 3 reviewed ports can also be checked, using the data from table 1.
Step9: From the table above, the relationship between Guaymas and Mazatlan is strong enough to be worth a more careful statistical review. Considering the geographic locations of the 3 ports, it is curious that La Paz shows a negative correlation while Guaymas shows a positive one; perhaps commercial or proximity relationships that fall outside this brief analysis.
|
13,387
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
np.set_printoptions(precision=4, suppress=True)
sns.set_context('notebook')
%matplotlib inline
theta = [[1., 2], [.5, 2.5], [.25, 2.75]]
def f(x, a, b):
if x < a or x > b:
return 0
else:
return 1/(b-a)
y = np.linspace(0, 3, 1000)  # the number of points must be an integer in current NumPy
for t in theta:
a, b = t[0], t[1]
ff = [f(x, a, b) for x in y]
plt.plot(y, ff, lw = 4)
plt.show()
n = 100
a, b = 1., 2.
# simulate data
y = np.random.uniform(a, b, n)
# plot data
plt.hist(y, bins=10, normed=True, lw=0)
plt.xlabel('$y_i$')
plt.ylabel('$\hat{f}$')
plt.show()
# sample size
n = int(1e2)
# true parameter value
a, b = 1., 2.
# simulate data
y = np.sort(np.random.uniform(a, b, n))
# MLE estimator
a_hat, b_hat = np.min(y), np.max(y)
print('Estimates are: a = ', a_hat, ' b = ', b_hat)
# function of exponential density
ff = lambda y, a, b: [f(x, a, b) for x in y]
# plot results
plt.hist(y, bins=10, normed=True, alpha=.2, lw=0)
plt.plot(y, ff(y, a, b), c='black', lw=4)
plt.plot(y, ff(y, a_hat, b_hat), c='red', lw=4)
plt.xlabel(r'$y_i$')
plt.ylabel(r'$\hat{f}$')
plt.legend(('True', 'Fitted','Histogram'))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Draw uniform density
Step2: Simulate data and draw histogram
Step3: Simulate data and estimate model parameter by MLE
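A short added note on why the MLE reduces to the sample extremes (this reasoning is not in the original notebook): for $y_1,\dots,y_n \sim U(a,b)$ the likelihood is $L(a,b) = (b-a)^{-n}$ whenever $a \le \min_i y_i$ and $b \ge \max_i y_i$, and $0$ otherwise, so it grows as the admissible interval shrinks; the maximizers are therefore $\hat{a} = \min_i y_i$ and $\hat{b} = \max_i y_i$, which is exactly what np.min(y) and np.max(y) compute above.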
|
13,388
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from the original notebook's setup cell (not included in this record)
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from numpy import sin
import mousai as ms
# Define our function (Python)
def duff_osc_ss(x, params):
omega = params['omega']
t = params['cur_time']
xd = np.array([[x[1]],
[-x[0] - 0.1 * x[0]**3 - 0.1 * x[1] + 1 * sin(omega * t)]])
return xd
# Arguments are name of derivative function, number of states, driving frequency,
# form of the equation, and number of harmonics
t, x, e, amps, phases = ms.hb_time(duff_osc_ss, num_variables=2, omega=.1,
eqform='first_order', num_harmonics=5)
print('Displacement amplitude is ', amps[0])
print('Velocity amplitude is ', amps[1])
def pltcont():
time, xc = ms.time_history(t, x)
disp_plot, _ = plt.plot(time, xc.T[:, 0], t,
x.T[:, 0], '*b', label='Displacement')
vel_plot, _ = plt.plot(time, xc.T[:, 1], 'r',
t, x.T[:, 1], '*r', label='Velocity')
plt.legend(handles=[disp_plot, vel_plot])
plt.xlabel('Time (sec)')
plt.title('Response of Duffing Oscillator at 0.0159 rad/sec')
plt.ylabel('Response')
plt.legend
plt.grid(True)
fig=plt.figure()
ax=fig.add_subplot(111)
time, xc = ms.time_history(t, x)
disp_plot, _ = ax.plot(time, xc.T[:, 0], t,
x.T[:, 0], '*b', label='Displacement')
vel_plot, _ = ax.plot(time, xc.T[:, 1], 'r',
t, x.T[:, 1], '*r', label='Velocity')
ax.legend(handles=[disp_plot, vel_plot])
ax.set_xlabel('Time (sec)')
ax.set_title('Response of Duffing Oscillator at 0.0159 rad/sec')
ax.set_ylabel('Response')
ax.legend
ax.grid(True)
pltcont()# abbreviated plotting function
time, xc = ms.time_history(t, x)
disp_plot, _ = plt.plot(time, xc.T[:, 0], t,
x.T[:, 0], '*b', label='Displacement')
vel_plot, _ = plt.plot(time, xc.T[:, 1], 'r',
t, x.T[:, 1], '*r', label='Velocity')
plt.legend(handles=[disp_plot, vel_plot])
plt.xlabel('Time (sec)')
plt.title('Response of Duffing Oscillator at 0.0159 rad/sec')
plt.ylabel('Response')
plt.legend
plt.grid(True)
omega = np.arange(0, 3, 1 / 200) + 1 / 200
amp = sp.zeros_like(omega)
amp[:] = np.nan
t, x, e, amps, phases = ms.hb_time(duff_osc_ss, num_variables=2,
omega=1 / 200, eqform='first_order', num_harmonics=1)
for i, freq in enumerate(omega):
# Here we try to obtain solutions, but if they don't work,
# we ignore them by inserting `np.nan` values.
x = x - sp.average(x)
try:
        t, x, e, amps, phases = \
ms.hb_time(duff_osc_ss, x0=x,
omega=freq, eqform='first_order', num_harmonics=1)
amp[i] = amps[0]
except:
amp[i] = np.nan
if np.isnan(amp[i]):
break
plt.plot(omega, amp)
omegal = np.arange(3, .03, -1 / 200) + 1 / 200
ampl = sp.zeros_like(omegal)
ampl[:] = np.nan
t, x, e, amps, phases = ms.hb_time(duff_osc_ss, num_variables=2,
omega=3, eqform='first_order', num_harmonics=1)
for i, freq in enumerate(omegal):
# Here we try to obtain solutions, but if they don't work,
# we ignore them by inserting `np.nan` values.
x = x - np.average(x)
try:
t, x, e, amps, phases = ms.hb_time(duff_osc_ss, x0=x,
omega=freq, eqform='first_order', num_harmonics=1)
ampl[i] = amps[0]
except:
ampl[i] = np.nan
if np.isnan(ampl[i]):
break
plt.plot(omega,amp, label='Up sweep')
plt.plot(omegal,ampl, label='Down sweep')
plt.legend()
plt.title('Amplitude versus frequency for Duffing Oscillator')
plt.xlabel('Driving frequency $\\omega$')
plt.ylabel('Amplitude')
plt.grid()
def two_dof_demo(x, params):
omega = params['omega']
t = params['cur_time']
force_amplitude = params['force_amplitude']
alpha = params['alpha']
# The following could call an external code to obtain the state derivatives
xd = np.array([[x[1]],
[-2 * x[0] - alpha * x[0]**3 + x[2]],
[x[3]],
[-2 * x[2] + x[0]]] + force_amplitude * np.sin(omega * t))
return xd
parameters = {'force_amplitude': 0.2}
parameters['alpha'] = 0.4
t, x, e, amps, phases = ms.hb_time(two_dof_demo, num_variables=4,
omega=1.2, eqform='first_order', params=parameters)
amps
alpha = np.linspace(-1, .45, 2000)
amp = np.zeros_like(alpha)
for i, alphai in enumerate(alpha):
parameters['alpha'] = alphai
t, x, e, amps, phases = ms.hb_time(two_dof_demo, num_variables=4, omega=1.2,
eqform='first_order', params=parameters)
amp[i] = amps[0]
plt.plot(alpha,amp)
plt.title('Amplitude of $x_1$ versus $\\alpha$')
plt.ylabel('Amplitude of $x_1$')
plt.xlabel('$\\alpha$')
plt.grid()
def two_dof_coulomb(x, params):
omega = params['omega']
t = params['cur_time']
force_amplitude = params['force_amplitude']
mu = params['mu']
# The following could call an external code to obtain the state derivatives
xd = np.array([[x[1]],
[-2 * x[0] - mu * np.abs(x[1]) + x[2]],
[x[3]],
[-2 * x[2] + x[0]]] + force_amplitude * np.sin(omega * t))
return xd
parameters = {'force_amplitude': 0.2}
parameters['mu'] = 0.1
t, x, e, amps, phases = ms.hb_time(two_dof_coulomb, num_variables=4,
omega=1.2, eqform='first_order', params=parameters)
amps
mu = np.linspace(0, 1.0, 200)
amp = np.zeros_like(mu)
for i, mui in enumerate(mu):
parameters['mu'] = mui
t, x, e, amps, phases = ms.hb_time(two_dof_coulomb, num_variables=4, omega=1.2,
eqform='first_order', num_harmonics=3, params=parameters)
amp[i] = amps[0]
plt.plot(mu,amp)
plt.title('Amplitude of $x_1$ versus $\\mu$')
plt.ylabel('Amplitude of $x_1$')
plt.xlabel('$\\mu$')
plt.grid()
out = ms.hb_time(lambda x, v,
params: np.array([[-x - .1 * x**3 - .1 * v + 1 *
sin(params['omega'] * params['cur_time'])]]),
num_variables=1, omega=.7, num_harmonics=1)
out[3][0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mousai can easily recreate the near-continuous response
Step2: Let's sweep through driving frequencies to find a frequency response function
Step3: Two degree of freedom system
Step4: Let's find a response.
Step5: Or a parametric study of response amplitude versus nonlinearity.
Step6: Two degree of freedom system with Coulomb Damping
Step7: Too much Coulomb friction can increase the response.
Step8: But can I solve an equation in one line? Yes!!!
|
13,389
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def rolling_apply(fun, a, w):
r = np.empty(a.shape)
r.fill(np.nan)
for i in range(w - 1, a.shape[0]):
r[i] = fun(a[(i-w+1):i+1])
return r
def rolling_window(a, window):
shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
strides = a.strides + (a.strides[-1],)
return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
x=np.arange(10).reshape((2,5))
rolling_window(x, 3)
np.mean(rolling_window(x, 3), -1)
def window(arr, shape=(3, 3)):
# Find row and column window sizes
r_win = np.floor(shape[0] / 2).astype(int)
c_win = np.floor(shape[1] / 2).astype(int)
x, y = arr.shape
for i in range(x):
xmin = max(0, i - r_win)
xmax = min(x, i + r_win + 1)
for j in range(y):
ymin = max(0, j - c_win)
ymax = min(y, j + c_win + 1)
yield arr[xmin:xmax, ymin:ymax]
arr = np.array([[1,2,3,4],
[2,3,4,5],
[3,4,5,6],
[4,5,6,7]])
gen = window(arr)
print(arr.shape)
# Print each step of the generator (runs for all elementrs of array)
i=0
while i < (arr.shape[0]*arr.shape[1]):
print(next(gen))
print("")
i+=1
def rolling_window(a, shape):
s = (a.shape[0] - shape[0] + 1,) + (a.shape[1] - shape[1] + 1,) + shape
strides = a.strides + a.strides
return np.lib.stride_tricks.as_strided(a, shape=s, strides=strides)
def window2(arr, shape=(3, 3)):
r_extra = np.floor(shape[0] / 2).astype(int)
c_extra = np.floor(shape[1] / 2).astype(int)
out = np.empty((arr.shape[0] + 2 * r_extra, arr.shape[1] + 2 * c_extra))
out[:] = np.nan
out[r_extra:-r_extra, c_extra:-c_extra] = arr
view = rolling_window(out, shape)
return view
window2(arr, (3,3))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A loop in Python is, however, very slow compared to a loop in C code. Fortunately there is a trick to make NumPy perform this looping internally in C code. This is achieved by adding an extra dimension with the same size as the window and an appropriate stride
Step2: Using this function it is easy to calculate for example a rolling mean without looping in Python
Step3: More info on strides here
Step4: It's not vectorized, but I'm not sure there is an existing vectorized function that returns different sized arrays. As @PaulPanzer points out, you could pad your array to the size you need and use np.lib.stride_tricks.as_strided to generate a view of the slices. Something like so
|
13,390
|
<ASSISTANT_TASK:>
Python Code:
import joblib
features=joblib.load('clean_LCfeatures.p')
labels=joblib.load('clean_LClabels.p')
clabels=joblib.load('clean_LCclassifierlabel.p')
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
features.head(n=10)
features['earliest_cr_line']=features.earliest_cr_line.dt.year
features.earliest_cr_line.dtype
import imp
imp.reload(vs)
vs.distribution(features)
valuespread={'annual_inc':max(features.annual_inc),'delinqmax':max(features.delinq_2yrs),
'openmax':max(features.open_acc), 'pubmax':max(features.pub_rec),
'revolbalmax':max(features.revol_bal),'revolutilmax':max(features.revol_util),
'collectmax':max(features.collections_12_mths_ex_med)}
valuespread
features.query('revol_bal>1000000.0')
features.query('revol_util>120')
features.drop([294407],inplace=True)
labels.drop([294407],inplace=True)
clabels.drop([294407],inplace=True)
imp.reload(vs)
vs.distribution(features)
# Log-transform the skewed features
skewed = ['annual_inc','delinq_2yrs','open_acc', 'pub_rec','revol_bal','total_acc', 'collections_12_mths_ex_med']
features_raw=features.copy()
features_raw[skewed] = features[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
imp.reload(vs)
vs.distribution(features_raw, transformed = True)
# import scipy.stats
# winsors=['delinq_2yrs','pub_rec','collections_12_mths_ex_med']
# for feature in winsors:
# winsor[feature]=scipy.stats.mstats.winsorize(features[feature], limits=0.01,axis=1)
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['loan_amnt','emp_length', 'annual_inc','dti','delinq_2yrs','earliest_cr_line','inq_last_6mths','open_acc', 'pub_rec','revol_bal','revol_util','total_acc', 'collections_12_mths_ex_med']
features_raw[numerical] = scaler.fit_transform(features_raw[numerical])
# Show an example of a record with scaling applied
display(features_raw.head(n = 10))
features_raw
features_raw=features_raw.drop(['emp_title','addr_state'],axis=1)
features_raw.dtypes
# One-hot encode the 'features_raw' data using pandas.get_dummies()
feat = pd.get_dummies(features_raw)
#print(income.head(n=10))
# Print the number of features after one-hot encoding
encoded = list(feat.columns)
print ("{} total features after one-hot encoding.".format(len(encoded)))
# Uncomment the following line to see the encoded feature names
print (encoded)
# Import train_test_split
from sklearn.model_selection import train_test_split #sklearn 0.18.1 and up
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(feat, labels, test_size = 0.2, random_state = 1)
# Show the results of the split
print ("Training set has {} samples.".format(X_train.shape[0]))
print ("Testing set has {} samples.".format(X_test.shape[0]))
from sklearn.metrics import r2_score, mean_squared_error
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# Fit the learner to the training data using slicing with 'sample_size'
start = time() # Get start time
learner = clf.fit(X_train.sample(n=sample_size,random_state=1),y_train.sample(n=sample_size,random_state=1)) #df.sample(frac=percent) maybe too?
end = time() # Get end time
# Calculate the training time
results['train_time'] = end-start
# Get the predictions on the test set,
# then get predictions on the first 300 training samples
start = time() # Get start time
predictions_test = clf.predict(X_test)
predictions_train = clf.predict(X_train[:300])
end = time() # Get end time
# Calculate the total prediction time
results['pred_time'] = end-start
# Compute mean square error on the first 300 training samples
results['mse_train'] = mean_squared_error(y_train[:300],predictions_train)
# Compute mean square error on test set
results['mse_test'] = mean_squared_error(y_test,predictions_test)
# Compute R^2 on the the first 300 training samples
results['R2_train'] = r2_score(y_train[:300],predictions_train)
# Compute R^2 on the test set
results['R2_test'] = r2_score(y_test,predictions_test)
# Success
print ("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
# Initialize models
clf_A = make_pipeline(PolynomialFeatures(2,interaction_only=True),LinearRegression())
clf_B = GradientBoostingRegressor(random_state=2)
clf_C = Ridge()
# Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = len(y_train.sample(frac=.01))
samples_10 = len(y_train.sample(frac=.1))
samples_100 = len(y_train.sample(frac=1))
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]: #clf_A,
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# print(results[clf])
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results,.5,.5)
results
from sklearn.tree import DecisionTreeRegressor
results2={}
clf_D = DecisionTreeRegressor(random_state=1)
clf_D.fit(X_train,y_train)
predictions_test = clf_D.predict(X_test)
predictions_train = clf_D.predict(X_train[:3000])
end = time() # Get end time
# Calculate the total prediction time
# Compute mean square error on the first 300 training samples
results2['mse_train'] = mean_squared_error(y_train[:3000],predictions_train)
# Compute mean square error on test set
results2['mse_test'] = mean_squared_error(y_test,predictions_test)
# Compute R^2 on the the first 300 training samples
results2['R2_train'] = r2_score(y_train[:3000],predictions_train)
# Compute R^2 on the test set
results2['R2_test'] = r2_score(y_test,predictions_test)
results2
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
# Initialize the classifier
clf = DecisionTreeRegressor(random_state=2)
# Create the parameters list you wish to tune
parameters = {'max_depth':(4,6,20,None),'min_samples_split':(2,50),'min_samples_leaf':(1,101)} #',,
# Make an R2_score scoring object
scorer = make_scorer(r2_score)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(clf,parameters,scoring=scorer,n_jobs=-1)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_fit = grid_obj.fit(X_train,y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-afterscores
print ("Unoptimized model\n------")
print ("MSE score on testing data: {:.4f}".format(mean_squared_error(y_test, predictions)))
print ("R2-score on testing data: {:.4f}".format(r2_score(y_test, predictions)))
print ("\nOptimized Model\n------")
print ("Final MSE score on the testing data: {:.4f}".format(mean_squared_error(y_test, best_predictions)))
print ("Final R2-score on the testing data: {:.4f}".format(r2_score(y_test, best_predictions)))
best_clf
# from sklearn.model_selection import GridSearchCV
# from sklearn.metrics import make_scorer
# # Initialize the classifier
# clf = GradientBoostingRegressor(random_state=2)
# # Create the parameters list you wish to tune
# parameters = {'learning_rate':(0.01,0.1), 'max_depth':(3,6)}
# # Make an R2_score scoring object
# scorer = make_scorer(r2_score)
# # TODO: Perform grid search on the classifier using 'scorer' as the scoring method
# grid_obj = GridSearchCV(clf,parameters,scoring=scorer,n_jobs=-1)
# # TODO: Fit the grid search object to the training data and find the optimal parameters
# grid_fit = grid_obj.fit(X_train,y_train)
# # Get the estimator
# best_clf = grid_fit.best_estimator_
# # Make predictions using the unoptimized and model
# predictions = (clf.fit(X_train, y_train)).predict(X_test)
# best_predictions = best_clf.predict(X_test)
# # Report the before-and-afterscores
# print ("Unoptimized model\n------")
# print ("MSE score on testing data: {:.4f}".format(mean_squared_error(y_test, predictions)))
# print ("R2-score on testing data: {:.4f}".format(r2_score(y_test, predictions)))
# print ("\nOptimized Model\n------")
# print ("Final MSE score on the testing data: {:.4f}".format(mean_squared_error(y_test, best_predictions)))
# print ("Final R2-score on the testing data: {:.4f}".format(r2_score(y_test, best_predictions)))
best_clf
# Import train_test_split
from sklearn.model_selection import train_test_split #sklearn 0.18.1 and up
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(feat, clabels, test_size = 0.2, random_state = 1)
# Show the results of the split
print ("Training set has {} samples.".format(X_train.shape[0]))
print ("Testing set has {} samples.".format(X_test.shape[0]))
from sklearn.metrics import accuracy_score,fbeta_score, roc_auc_score, cohen_kappa_score, precision_score
allones=np.ones(len(clabels))
acc=accuracy_score(clabels,allones)
fbeta=fbeta_score(clabels,allones,beta=.5)
auc=roc_auc_score(clabels,allones)
cohen=cohen_kappa_score(clabels,allones)
prec=precision_score(clabels,allones)
# Print the results
print ("Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}, ROC AUC: {:.4f}, Cohen's k: {:.4f}, precision: {:.4f}]".format(acc, fbeta,auc,cohen,prec))
from sklearn.metrics import fbeta_score, roc_auc_score, cohen_kappa_score
def train_predictclass(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
resultsclass = {}
# Fit the learner to the training data using slicing with 'sample_size'
start = time() # Get start time
learner = clf.fit(X_train.sample(n=sample_size,random_state=1),y_train.sample(n=sample_size,random_state=1)) #df.sample(frac=percent) maybe too?
end = time() # Get end time
# Calculate the training time
resultsclass['train_time'] = end-start
# Get the predictions on the test set,
# then get predictions on the first 300 training samples
start = time() # Get start time
predictions_test = clf.predict(X_test)
predictions_train = clf.predict(X_train[:300])
end = time() # Get end time
# Calculate the total prediction time
resultsclass['pred_time'] = end-start
# Compute accuracy on the first 300 training samples
resultsclass['ROC_AUC_train'] = roc_auc_score(y_train[:300],predictions_train)
# Compute accuracy on test set
resultsclass['ROC_AUC_test'] = roc_auc_score(y_test,predictions_test)
# Compute F-score on the the first 300 training samples
resultsclass['precision_train'] = precision_score(y_train[:300],predictions_train)
# Compute F-score on the test set
resultsclass['precision_test'] = precision_score(y_test,predictions_test)
# Success
print ("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return resultsclass
# Initialize models
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import GaussianNB
# TODO: Initialize the three models
clf_A = GaussianNB()
clf_B = DecisionTreeClassifier(random_state=5)
clf_C = RandomForestClassifier(n_estimators=50)
# Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = len(y_train.sample(frac=.01))
samples_10 = len(y_train.sample(frac=.1))
samples_100 = len(y_train.sample(frac=1))
# Collect results on the learners
resultsclass = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
resultsclass[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
resultsclass[clf_name][i] = \
train_predictclass(clf, samples, X_train, y_train, X_test, y_test)
# print(results[clf])
# Run metrics visualization for the three supervised learning models chosen
imp.reload(vs)
vs.classevaluate(resultsclass,auc,prec)
# Initialize the classifier
clf = DecisionTreeClassifier(random_state=3)
# Create the parameters list you wish to tune
parameters = {'max_depth':(20,50, None),'max_leaf_nodes':(100,300,None)}
# Make an R2_score scoring object
scorer = make_scorer(precision_score)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(clf,parameters,scoring=scorer,n_jobs=-1)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_fit = grid_obj.fit(X_train,y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-afterscores
print ("Unoptimized model\n------")
print ("ROC AUC score on testing data: {:.4f}".format(roc_auc_score(y_test, predictions)))
print ("Precision on testing data: {:.4f}".format(precision_score(y_test, predictions)))
print ("\nOptimized Model\n------")
print ("Final ROC AUC score on the testing data: {:.4f}".format(roc_auc_score(y_test, best_predictions)))
print ("Final Precision on the testing data: {:.4f}".format(precision_score(y_test, best_predictions)))
from sklearn.model_selection import cross_val_score
scores = cross_val_score(best_clf,X_train, y_train, cv=20,scoring=scorer, n_jobs=-1)
scores
difference=scores.mean()-prec
import math
from scipy import stats,special
zscore=difference/(scores.std())
p_values = 1-special.ndtr(zscore)
print(zscore,p_values)
(scores.std()/math.sqrt(10))
scores.std()*100
import joblib
joblib.dump(clf, 'decisiontree.p')
features.columns
# from sklearn import tree
# import pydotplus
# dot_data = tree.export_graphviz(best_clf, out_file=None,feature_names=feat.columns,class_names=['Bad Loan','Fully Paid'])
# graph = pydotplus.graph_from_dot_data(dot_data)
# graph.write_pdf("LC_decisiontree.pdf")
best_clf.feature_importances_
importances = best_clf.feature_importances_
# Plot
import importlib
importlib.reload(vs)
vs.feature_plot(importances, X_train, y_train)
best_clf.tree_.node_count
indices = np.argsort(importances)[::-1]
columns = X_train.columns.values[indices[:10]]
values = importances[indices][:10]
np.cumsum(values)
values
columns
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some weird scaling going on, just to check my sanity, check the max values of a few of the features. these may need to be log scaled just to deal with the large values. Income almost always is going to need log scaling anyway.
Step2: There are some crazy revolving balances and utilizations. I need to check those quickly. Everything else looks extreme, but within ranges I'd assume could exist.
Step3: I guess these are pretty understandable, high paying jobs and income.
Step4: Having a utilization over 100% is odd, but having a utilization of almost 900% is crazy! I am going to just remove this one loan - not sure what is going on with this outlier. The others are clearly weird, but there doesn't seem to be an obvious cutoff beyond removing the huge outlier.
Step5: Utilization looks much more normal!
Step6: We see a clear improvement in income, open accounts, and revolving balance. The others are improved but still are dominated by low numbers. That's okay for now. There are extreme outliers but they are important. I am tempted to winsorize, but the values are so heavily weighted toward 0, that the winsorization would affect all values greater than zero.
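For completeness, a hedged sketch of what that winsorization would look like (deliberately kept out of the pipeline for the reason above; the column list is taken from the commented-out snippet earlier in this notebook):
from scipy.stats import mstats
for feature in ['delinq_2yrs', 'pub_rec', 'collections_12_mths_ex_med']:
    # clip the bottom/top 1% of each heavily skewed count feature
    features[feature] = mstats.winsorize(features[feature], limits=[0.01, 0.01])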
Step7: Normalization
Step8: One Hot Encoding
Step9: Great! There are the original continuous features, and now a feature for all types of home ownership, purposes for the loan, and different address states!
Step10: Testing different models
Step11: Checking the polynomial regression
Step12: Regression Postmortem
Step13: Overfitting with Decision Trees - Success?
Step14: Interestingly, the best boosted regressor is the "stock" one. It is pretty poor. I think it is time to try classification instead of regression
Step15: Naive Predictor
Step16: Wow! accuracy and F-score are pretty high. likely, this is because there is an order of magnitude more fully paid loans than others. This can be seen in the data cleaning notebook! Interestingly, ROC AUC and Cohen's k show how poor a naive classifier should be. I'll use those to judge my models.
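A quick, hypothetical sanity check of that imbalance (assuming clabels is a pandas Series, as the rest of the notebook suggests):
# Illustration only: shows the share of fully paid vs. bad loans behind the high naive scores.
print(clabels.value_counts(normalize=True))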
Step17: Results
Step18: Interestingly, this doesn't really improve the precision, there may not be enough bad loan data to give the model something to hang on to, and potentially, it could be that we just don't have the data collected that would be more predictive. Because it is crazy simple, and it actually performs better at reducing false positives than others, at least with this training/test set. It is worth noting that the 0.0095% increase in accuracy over the naive classifier is very small but better than nothing.
|
13,391
|
<ASSISTANT_TASK:>
Python Code:
def lstrip(iterable, strip_value):
    """Return iterable with strip_value removed from the beginning."""
stripped = []
iterator = iter(iterable)
for item in iterator:
if not item == strip_value:
stripped.append(item)
break
for item in iterator:
stripped.append(item)
return stripped
import unittest
class LStripTests(unittest.TestCase):
    """Tests for lstrip."""
def assertIterableEqual(self, iterable1, iterable2):
self.assertEqual(list(iterable1), list(iterable2))
def test_list(self):
self.assertIterableEqual(lstrip([1, 1, 2, 3], 1), [2, 3])
def test_nothing_to_strip(self):
self.assertIterableEqual(lstrip([1, 2, 3], 0), [1, 2, 3])
def test_string(self):
self.assertIterableEqual(lstrip(' hello', ' '), 'hello')
def test_empty_iterable(self):
self.assertIterableEqual(lstrip([], 1), [])
def test_strip_all(self):
self.assertIterableEqual(lstrip([1, 1, 1], 1), [])
def test_none_values(self):
self.assertIterableEqual(lstrip([None, 1, 2, 3], 0), [None, 1, 2, 3])
def test_iterator(self):
squares = (n**2 for n in [0, 0, 1, 2, 3])
self.assertIterableEqual(lstrip(squares, 0), [1, 4, 9])
# To test the Bonus part of this exercise, comment out the following line
@unittest.expectedFailure
def test_returns_iterator(self):
stripped = lstrip((1, 2, 3), 1)
self.assertEqual(iter(stripped), iter(stripped))
# To test the Bonus part of this exercise, comment out the following line
@unittest.expectedFailure
def test_function_given(self):
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
self.assertIterableEqual(lstrip(numbers, is_even), [1, 3, 5, 6])
if __name__ == "__main__":
unittest.main(argv=['ignore-first-arg'], exit=False)
def lstrip(iterable, strip_value):
    """Return iterable with strip_value removed from the beginning."""
iterator = iter(iterable)
for item in iterator:
if not item == strip_value:
yield item
break
for item in iterator:
yield item
import unittest
class LStripTests(unittest.TestCase):
    """Tests for lstrip."""
def assertIterableEqual(self, iterable1, iterable2):
self.assertEqual(list(iterable1), list(iterable2))
def test_list(self):
self.assertIterableEqual(lstrip([1, 1, 2, 3], 1), [2, 3])
def test_nothing_to_strip(self):
self.assertIterableEqual(lstrip([1, 2, 3], 0), [1, 2, 3])
def test_string(self):
self.assertIterableEqual(lstrip(' hello', ' '), 'hello')
def test_empty_iterable(self):
self.assertIterableEqual(lstrip([], 1), [])
def test_strip_all(self):
self.assertIterableEqual(lstrip([1, 1, 1], 1), [])
def test_none_values(self):
self.assertIterableEqual(lstrip([None, 1, 2, 3], 0), [None, 1, 2, 3])
def test_iterator(self):
squares = (n**2 for n in [0, 0, 1, 2, 3])
self.assertIterableEqual(lstrip(squares, 0), [1, 4, 9])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_returns_iterator(self):
stripped = lstrip((1, 2, 3), 1)
self.assertEqual(iter(stripped), iter(stripped))
# To test the Bonus part of this exercise, comment out the following line
@unittest.expectedFailure
def test_function_given(self):
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
self.assertIterableEqual(lstrip(numbers, is_even), [1, 3, 5, 6])
if __name__ == "__main__":
unittest.main(argv=['ignore-first-arg'], exit=False)
my_list = [1, 2, 3, 4]
iterator1 = iter(my_list)
iterator2 = iter(my_list)
iterator1 == iterator2
my_list = [1, 2, 3, 4]
iterator = iter(my_list)
iterator1 = iter(iterator)
iterator2 = iter(iterator)
iterator1 == iterator2
def lstrip(iterable, strip_value):
    """Return iterable with strip_value items removed from beginning."""
iterator = iter(iterable)
for item in iterator:
        if (callable(strip_value) and not strip_value(item)
                or not callable(strip_value) and item != strip_value):
yield item
break
for item in iterator:
yield item
import unittest
class LStripTests(unittest.TestCase):
    """Tests for lstrip."""
def assertIterableEqual(self, iterable1, iterable2):
self.assertEqual(list(iterable1), list(iterable2))
def test_list(self):
self.assertIterableEqual(lstrip([1, 1, 2, 3], 1), [2, 3])
def test_nothing_to_strip(self):
self.assertIterableEqual(lstrip([1, 2, 3], 0), [1, 2, 3])
def test_string(self):
self.assertIterableEqual(lstrip(' hello', ' '), 'hello')
def test_empty_iterable(self):
self.assertIterableEqual(lstrip([], 1), [])
def test_strip_all(self):
self.assertIterableEqual(lstrip([1, 1, 1], 1), [])
def test_none_values(self):
self.assertIterableEqual(lstrip([None, 1, 2, 3], 0), [None, 1, 2, 3])
def test_iterator(self):
squares = (n**2 for n in [0, 0, 1, 2, 3])
self.assertIterableEqual(lstrip(squares, 0), [1, 4, 9])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_returns_iterator(self):
stripped = lstrip((1, 2, 3), 1)
self.assertEqual(iter(stripped), iter(stripped))
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_function_given(self):
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
self.assertIterableEqual(lstrip(numbers, is_even), [1, 3, 5, 6])
if __name__ == "__main__":
unittest.main(argv=['ignore-first-arg'], exit=False)
def lstrip(iterable, strip_value):
    """Return iterable with strip_value items removed from beginning."""
iterator = iter(iterable)
if callable(strip_value):
predicate = strip_value
else:
def predicate(value): return value == strip_value
for item in iterator:
if not predicate(item):
yield item
break
for item in iterator:
yield item
import unittest
class LStripTests(unittest.TestCase):
    """Tests for lstrip."""
def assertIterableEqual(self, iterable1, iterable2):
self.assertEqual(list(iterable1), list(iterable2))
def test_list(self):
self.assertIterableEqual(lstrip([1, 1, 2, 3], 1), [2, 3])
def test_nothing_to_strip(self):
self.assertIterableEqual(lstrip([1, 2, 3], 0), [1, 2, 3])
def test_string(self):
self.assertIterableEqual(lstrip(' hello', ' '), 'hello')
def test_empty_iterable(self):
self.assertIterableEqual(lstrip([], 1), [])
def test_strip_all(self):
self.assertIterableEqual(lstrip([1, 1, 1], 1), [])
def test_none_values(self):
self.assertIterableEqual(lstrip([None, 1, 2, 3], 0), [None, 1, 2, 3])
def test_iterator(self):
squares = (n**2 for n in [0, 0, 1, 2, 3])
self.assertIterableEqual(lstrip(squares, 0), [1, 4, 9])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_returns_iterator(self):
stripped = lstrip((1, 2, 3), 1)
self.assertEqual(iter(stripped), iter(stripped))
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_function_given(self):
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
self.assertIterableEqual(lstrip(numbers, is_even), [1, 3, 5, 6])
if __name__ == "__main__":
unittest.main(argv=['ignore-first-arg'], exit=False)
from itertools import dropwhile
def lstrip(iterable, strip_value):
    """Return iterable with strip_value items removed from beginning."""
def is_strip_value(item): return item == strip_value
return dropwhile(is_strip_value, iterable)
import unittest
class LStripTests(unittest.TestCase):
    """Tests for lstrip."""
def assertIterableEqual(self, iterable1, iterable2):
self.assertEqual(list(iterable1), list(iterable2))
def test_list(self):
self.assertIterableEqual(lstrip([1, 1, 2, 3], 1), [2, 3])
def test_nothing_to_strip(self):
self.assertIterableEqual(lstrip([1, 2, 3], 0), [1, 2, 3])
def test_string(self):
self.assertIterableEqual(lstrip(' hello', ' '), 'hello')
def test_empty_iterable(self):
self.assertIterableEqual(lstrip([], 1), [])
def test_strip_all(self):
self.assertIterableEqual(lstrip([1, 1, 1], 1), [])
def test_none_values(self):
self.assertIterableEqual(lstrip([None, 1, 2, 3], 0), [None, 1, 2, 3])
def test_iterator(self):
squares = (n**2 for n in [0, 0, 1, 2, 3])
self.assertIterableEqual(lstrip(squares, 0), [1, 4, 9])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_returns_iterator(self):
stripped = lstrip((1, 2, 3), 1)
self.assertEqual(iter(stripped), iter(stripped))
# To test the Bonus part of this exercise, comment out the following line
@unittest.expectedFailure
def test_function_given(self):
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
self.assertIterableEqual(lstrip(numbers, is_even), [1, 3, 5, 6])
if __name__ == "__main__":
unittest.main(argv=['ignore-first-arg'], exit=False)
from itertools import dropwhile
def lstrip(iterable, strip_value):
    """Return iterable with strip_value items removed from beginning."""
if callable(strip_value):
predicate = strip_value
else:
def predicate(item): return item == strip_value
return dropwhile(predicate, iterable)
import unittest
class LStripTests(unittest.TestCase):
    """Tests for lstrip."""
def assertIterableEqual(self, iterable1, iterable2):
self.assertEqual(list(iterable1), list(iterable2))
def test_list(self):
self.assertIterableEqual(lstrip([1, 1, 2, 3], 1), [2, 3])
def test_nothing_to_strip(self):
self.assertIterableEqual(lstrip([1, 2, 3], 0), [1, 2, 3])
def test_string(self):
self.assertIterableEqual(lstrip(' hello', ' '), 'hello')
def test_empty_iterable(self):
self.assertIterableEqual(lstrip([], 1), [])
def test_strip_all(self):
self.assertIterableEqual(lstrip([1, 1, 1], 1), [])
def test_none_values(self):
self.assertIterableEqual(lstrip([None, 1, 2, 3], 0), [None, 1, 2, 3])
def test_iterator(self):
squares = (n**2 for n in [0, 0, 1, 2, 3])
self.assertIterableEqual(lstrip(squares, 0), [1, 4, 9])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_returns_iterator(self):
stripped = lstrip((1, 2, 3), 1)
self.assertEqual(iter(stripped), iter(stripped))
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_function_given(self):
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
self.assertIterableEqual(lstrip(numbers, is_even), [1, 3, 5, 6])
if __name__ == "__main__":
unittest.main(argv=['ignore-first-arg'], exit=False)
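# Quick usage demo of the final dropwhile-based lstrip (added for illustration,
# not one of the original exercise cells):
print(list(lstrip([0, 0, 1, 2, 0, 3], 0)))                         # [1, 2, 0, 3]
print(list(lstrip([0, 2, 4, 1, 3, 5, 6], lambda n: n % 2 == 0)))   # [1, 3, 5, 6]
print(''.join(lstrip('   hello', ' ')))                            # 'hello'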
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Using the iterator protocol
Step5: Bonus1
Step6: Experiments
Step11: Bonus2
Step14: Using dropwhile helper function in itertools module
Step17: Bonus2
|
13,392
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import patsy as ps
from statsmodels.sandbox.regression.gmm import IV2SLS
import os, sys
from dowhy import CausalModel
n_points = 1000
education_abilty = 1
education_voucher = 2
income_abilty = 2
income_education = 4
# confounder
ability = np.random.normal(0, 3, size=n_points)
# instrument
voucher = np.random.normal(2, 1, size=n_points)
# treatment
education = np.random.normal(5, 1, size=n_points) + education_abilty * ability +\
education_voucher * voucher
# outcome
income = np.random.normal(10, 3, size=n_points) +\
income_abilty * ability + income_education * education
# build dataset (exclude confounder `ability` which we assume to be unobserved)
data = np.stack([education, income, voucher]).T
df = pd.DataFrame(data, columns = ['education', 'income', 'voucher'])
#Step 1: Model
model=CausalModel(
data = df,
treatment='education',
outcome='income',
common_causes=['U'],
instruments=['voucher']
)
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
# Step 2: Identify
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
# Step 3: Estimate
#Choose the second estimand: using IV
estimate = model.estimate_effect(identified_estimand,
method_name="iv.instrumental_variable", test_significance=True)
print(estimate)
# Step 4: Refute
ref = model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") # only permute placebo_type works with IV estimate
print(ref)
income_vec, endog = ps.dmatrices("income ~ education", data=df)
exog = ps.dmatrix("voucher", data=df)
m = IV2SLS(income_vec, endog, exog).fit()
m.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the dataset
Step2: Using DoWhy to estimate the causal effect of education on future income
Step3: We have an estimate, indicating that increasing education by one unit increases income by 4 points.
Step4: The refutation gives confidence that the estimate is not capturing any noise in the data.
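As a quick sanity check on that IV estimate, the single-instrument Wald (ratio) estimator can be computed directly from the simulated data; a minimal sketch reusing the df built above (illustrative, not part of the original notebook):

import numpy as np

# Wald estimator: Cov(outcome, instrument) / Cov(treatment, instrument).
# It should land close to the true effect of 4 used in the data-generating process.
beta_iv = (np.cov(df['income'], df['voucher'])[0, 1]
           / np.cov(df['education'], df['voucher'])[0, 1])
print(beta_iv)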
|
13,393
|
<ASSISTANT_TASK:>
Python Code:
import varout.layers
import varout.objectives
import varout.experiments
import lasagne.layers
import lasagne.nonlinearities
import lasagne.init
import theano
import theano.tensor as T
import numpy as np
import holonets
import holoviews as hv
%load_ext holoviews.ipython
dataset = varout.experiments.load_data()
dataset['X_train'] = np.vstack([dataset['X_train'],dataset['X_valid']])
dataset['y_train'] = np.hstack([dataset['y_train'],dataset['y_valid']])
print dataset['X_train'].shape
# load dataset as shared variable
dataset = holonets.monitor.enforce_shared(dataset, T.matrix, T.ivector)
rng = np.random.RandomState(42)
def salimans_sample(shape):
w = rng.normal(0,1,(shape[0],shape[1]))
w -= np.mean(w, axis=0, keepdims=True)
w /= np.sqrt(np.sum(np.square(w), axis=0, keepdims=True))
return w.astype(theano.config.floatX)
batch_size, input_dim, n_hidden, output_dim = 200, 784, 1024, 10
l_in = lasagne.layers.InputLayer((batch_size, input_dim))
l_hidden1 = lasagne.layers.DenseLayer(l_in, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.rectify,
W=salimans_sample)
l_drop1 = varout.layers.VariationalDropoutB(l_hidden1, p=0.2, adaptive='weightwise')
l_hidden2 = lasagne.layers.DenseLayer(l_drop1, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.rectify,
W=salimans_sample)
l_drop2 = varout.layers.VariationalDropoutB(l_hidden2, p=0.5, adaptive='weightwise')
l_hidden3 = lasagne.layers.DenseLayer(l_drop2, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.rectify,
W=salimans_sample)
l_drop3 = varout.layers.VariationalDropoutB(l_hidden3, p=0.5, adaptive='weightwise')
l_hidden4 = lasagne.layers.DenseLayer(l_drop3, num_units=output_dim,
nonlinearity=lasagne.nonlinearities.softmax,
W=salimans_sample)
l_out = varout.layers.VariationalDropoutB(l_hidden4, p=0.5, adaptive='weightwise')
def set_up_training(l_out, dataset, squash_updates=False, N_train=60000, N_test=10000):
expressions = holonets.monitor.Expressions(l_out, dataset, update_rule=lasagne.updates.adam,
loss_function=lasagne.objectives.categorical_crossentropy,
loss_aggregate=T.mean,
extra_loss=-varout.objectives.priorKL(l_out)/N_train,
learning_rate=0.001)
# add channel to monitor loss and accuracy on training and test
expressions.add_channel(**expressions.loss('train', False))
expressions.add_channel(**expressions.accuracy('train', False))
expressions.add_channel(**expressions.loss('test', True))
expressions.add_channel(**expressions.accuracy('test', True))
expressions.add_channel(name='cross-entropy loss', dimension='Loss',
expression=T.mean(
lasagne.objectives.categorical_crossentropy(expressions.network_output, expressions.y_batch)),
function='train')
expressions.add_channel(name='DKL', dimension='Loss',
expression=-varout.objectives.priorKL(l_out)/N_train, function='train')
# would like to track the various alphas
for i, dlayer in enumerate([l for l in lasagne.layers.get_all_layers(l_out)
if isinstance(l, varout.layers.VariationalDropout)]):
expressions.add_channel(name='Dropout Layer {0} Mean Alpha'.format(i+1),
dimension='Alpha', function='train',
expression=T.mean(T.nnet.sigmoid(dlayer.logitalpha)))
expressions.add_channel(name='Dropout Layer {0} Sigma Alpha'.format(i+1),
dimension='Alpha', function='train',
expression=T.sqrt(T.var(T.nnet.sigmoid(dlayer.logitalpha))))
channels = expressions.build_channels()
train = holonets.train.Train(channels, n_batches={'train': N_train//batch_size,
'test': N_test//batch_size})
loop = holonets.run.EpochLoop(train, dimensions=train.dimensions)
return loop
loop = set_up_training(l_out, dataset)
localA10 = loop.run(10)
localA10.layout('Channel')
def wp_comparisons(l_out):
all_layers = lasagne.layers.get_all_layers(l_out)
hmap = hv.HoloMap(kdims=['Layer', 'Noise Variance/Weight'])
for i in range(1,9,2):
hmap[(i, 'Weight')] = hv.Image(all_layers[i].W.eval(), vdims=['Weight'])
hmap[(i, 'Noise Variance')] = hv.Image(T.nnet.sigmoid(all_layers[i+1].logitalpha).eval(), vdims=['Alpha'])
return hmap
examples = []
for i in range(16):
examples.append(hv.Image(T.nnet.sigmoid(l_drop1.logitalpha).eval()[:,i].reshape(28,28)).hist())
hv.Layout(examples)
wp_comparisons(l_out).layout(['Layer', 'Noise Variance/Weight'])
def read_gradient_variance(l_out, dataset, batch_size, deterministic=False, N_train=60000):
# monitor the gradient wrt top and bottom weight matrices:
all_layers = [l for l in lasagne.layers.get_all_layers(l_out)
if isinstance(l, lasagne.layers.DenseLayer)]
bottom_dense = all_layers[0]
top_dense = all_layers[-1]
X_batch, y_batch = T.matrix('X'), T.ivector('y')
netout = lasagne.layers.get_output(l_out, X_batch, deterministic=deterministic)
lowerbound = T.mean(lasagne.objectives.categorical_crossentropy(netout, y_batch))\
- varout.objectives.priorKL(l_out)/N_train
bottom_grads = T.grad(lowerbound, bottom_dense.W)
top_grads = T.grad(lowerbound, top_dense.W)
# run this over the whole training set
batch_index = T.iscalar('index')
batch_slice = slice(batch_index*batch_size, (batch_index+1)*batch_size)
eval_grads = theano.function([batch_index], [bottom_grads, top_grads],
givens={X_batch: dataset['X_train'][batch_slice],
y_batch: dataset['y_train'][batch_slice]})
bgrads = []
tgrads = []
n_batches = N_train//batch_size
for i in range(n_batches):
b,t = eval_grads(i)
bgrads.append(np.array(b))
tgrads.append(np.array(t))
bgrads, tgrads = np.array(bgrads), np.array(tgrads)
# take the variance along minibatches
corrected_var = lambda x: ((N_train*(x-x.mean(axis=0)))**2).mean(axis=0)
bvar, tvar = corrected_var(bgrads), corrected_var(tgrads)
# then average this
return np.mean(bvar), np.mean(tvar)
meangradients = {}
meangradients['indep bottom 10'], meangradients['indep top 10'] = read_gradient_variance(l_out, dataset, batch_size)
print "Mean Gradient variance at bottom layer: {0}".format(meangradients['indep bottom 10'])
print "Mean Gradient variance at top layer: {0}".format(meangradients['indep top 10'])
indep10params = lasagne.layers.get_all_param_values(l_out)
%%time
loop.run(90).layout('Channel')
examples = []
for i in range(16):
examples.append(hv.Image(T.nnet.sigmoid(l_drop1.logitalpha).eval()[:,i].reshape(28,28)).hist())
hv.Layout(examples)
wp_comparisons(l_out).layout(['Layer', 'Noise Variance/Weight'])
indep100params = lasagne.layers.get_all_param_values(l_out)
meangradients['indep bottom 100'], meangradients['indep top 100'] = read_gradient_variance(l_out, dataset, batch_size)
print "Mean Gradient variance at bottom layer: {0}".format(meangradients['indep bottom 100'])
print "Mean Gradient variance at top layer: {0}".format(meangradients['indep top 100'])
import pickle
import gzip
with gzip.open("indep_params.pkl.gz", "wb") as f:
pickle.dump([indep10params, indep100params], f)
import pickle
import gzip
with gzip.open("indep_params.pkl.gz", "rb") as f:
indep10params, indep100params = pickle.load(f)
lasagne.layers.set_all_param_values(l_out, indep10params)
meangradients['no dropout bottom 10'], meangradients['no dropout top 10'] = \
read_gradient_variance(l_out, dataset, batch_size, deterministic=True)
print "Mean Gradient variance at bottom layer: {0}".format(meangradients['no dropout bottom 10'])
print "Mean Gradient variance at top layer: {0}".format(meangradients['no dropout top 10'])
lasagne.layers.set_all_param_values(l_out, indep100params)
meangradients['no dropout bottom 100'], meangradients['no dropout top 100'] = \
read_gradient_variance(l_out, dataset, batch_size, deterministic=True)
print "Mean Gradient variance at bottom layer: {0}".format(meangradients['no dropout bottom 100'])
print "Mean Gradient variance at top layer: {0}".format(meangradients['no dropout top 100'])
l_in = lasagne.layers.InputLayer((batch_size, input_dim))
l_hidden1 = varout.layers.SingleWeightSample(l_in, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.rectify,
W=salimans_sample)
l_hidden1.logitalpha = l_drop1.logitalpha
l_hidden2 = varout.layers.SingleWeightSample(l_hidden1, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.rectify,
W=salimans_sample)
l_hidden2.logitalpha = l_drop2.logitalpha
l_hidden3 = varout.layers.SingleWeightSample(l_hidden2, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.rectify,
W=salimans_sample)
l_hidden3.logitalpha = l_drop3.logitalpha
l_hidden4 = varout.layers.SingleWeightSample(l_hidden3, num_units=output_dim,
nonlinearity=lasagne.nonlinearities.softmax,
W=salimans_sample)
l_hidden4.logitalpha = l_out.logitalpha
single10params = [indep10params[i] for i in sorted(range(0,12,3)+range(1,12,3))]
single100params= [indep100params[i] for i in sorted(range(0,12,3)+range(1,12,3))]
lasagne.layers.set_all_param_values(l_hidden4, single10params)
meangradients['single sample bottom 10'], meangradients['single sample top 10'] = \
read_gradient_variance(l_hidden4, dataset, batch_size)
print "Mean Gradient variance at bottom layer: {0}".format(meangradients['single sample bottom 10'])
print "Mean Gradient variance at top layer: {0}".format(meangradients['single sample top 10'])
lasagne.layers.set_all_param_values(l_hidden4, single100params)
meangradients['single sample bottom 100'], meangradients['single sample top 100'] = \
read_gradient_variance(l_hidden4, dataset, batch_size)
print "Mean Gradient variance at bottom layer: {0}".format(meangradients['single sample bottom 100'])
print "Mean Gradient variance at top layer: {0}".format(meangradients['single sample top 100'])
l_in = lasagne.layers.InputLayer((1, input_dim))
l_hidden1 = varout.layers.SingleWeightSample(l_in, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.rectify,
W=salimans_sample)
l_hidden1.logitalpha = l_drop1.logitalpha
l_hidden2 = varout.layers.SingleWeightSample(l_hidden1, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.rectify,
W=salimans_sample)
l_hidden2.logitalpha = l_drop2.logitalpha
l_hidden3 = varout.layers.SingleWeightSample(l_hidden2, num_units=n_hidden, nonlinearity=lasagne.nonlinearities.rectify,
W=salimans_sample)
l_hidden3.logitalpha = l_drop3.logitalpha
l_hidden4 = varout.layers.SingleWeightSample(l_hidden3, num_units=output_dim,
nonlinearity=lasagne.nonlinearities.softmax,
W=salimans_sample)
l_hidden4.logitalpha = l_out.logitalpha
lasagne.layers.set_all_param_values(l_hidden4, single10params)
# speeding this up by doing the running averages in GPU memory
# inspired by Salimans code
from lasagne.utils import floatX
N_train, batch_size = 60000, 200
class Struct:
pass
X_batch, y_batch = T.matrix('X'), T.ivector('y')
batch_index = T.iscalar('index')
batch_slice = slice(batch_index, (batch_index+1))
n_batches = N_train//batch_size
t,b = Struct(), Struct() # it's like a dictionary you can tab complete in a notebook...
for s in [t,b]:
if s == b:
l = l_hidden1
elif s == t:
l = l_hidden4
s.mean = theano.shared(T.zeros_like(l.W).eval())
s.batch_mean = theano.shared(T.zeros_like(l.W).eval())
s.var = theano.shared(T.zeros_like(l.W).eval())
s.netout = lasagne.layers.get_output(l_hidden4, X_batch)
s.lowerbound = T.mean(lasagne.objectives.categorical_crossentropy(s.netout, y_batch))\
- varout.objectives.priorKL(l_hidden4)/N_train
s.grad = T.grad(s.lowerbound, l.W)
s.moving_mean = theano.function([batch_index], [],
updates={s.mean: s.mean + floatX(1./N_train)*s.grad},
givens={X_batch: dataset['X_train'][batch_slice],
y_batch: dataset['y_train'][batch_slice]})
s.moving_batch_mean = theano.function([batch_index], [],
updates={s.batch_mean: s.batch_mean + floatX(1./batch_size)*s.grad},
givens={X_batch: dataset['X_train'][batch_slice],
y_batch: dataset['y_train'][batch_slice]})
s.moving_var = theano.function([], [],
updates={s.var: s.var + floatX(1./n_batches)*T.square(floatX(N_train)*(s.batch_mean-s.mean))})
%%time
for i in range(N_train):
for s in [t,b]:
s.moving_mean(i)
for i in range(n_batches):
for s in [t,b]:
s.batch_mean.set_value(np.zeros_like(s.batch_mean.get_value()))
for j in range(i*batch_size, (i+1)*batch_size):
s.moving_batch_mean(j)
s.moving_var()
meangradients['separate samples bottom 10'] = np.mean(b.var.get_value())
meangradients['separate samples top 10'] = np.mean(t.var.get_value())
print "Mean Gradient variance at bottom layer: {0}".format(meangradients['separate samples bottom 10'])
print "Mean Gradient variance at top layer: {0}".format(meangradients['separate samples top 10'])
lasagne.layers.set_all_param_values(l_hidden4, single100params)
for i in range(N_train):
for s in [t,b]:
s.moving_mean(i)
for i in range(n_batches):
for s in [t,b]:
s.batch_mean.set_value(np.zeros_like(s.batch_mean.get_value()))
for j in range(i*batch_size, (i+1)*batch_size):
s.moving_batch_mean(j)
s.moving_var()
meangradients['separate samples bottom 100'] = np.mean(b.var.get_value())
meangradients['separate samples top 100'] = np.mean(t.var.get_value())
print "Mean Gradient variance at bottom layer: {0}".format(meangradients['separate samples bottom 100'])
print "Mean Gradient variance at top layer: {0}".format(meangradients['separate samples top 100'])
def get_variances(meangradients, estimator):
return meangradients[estimator+" top 10"], meangradients[estimator+" bottom 10"],\
meangradients[estimator+" top 100"], meangradients[estimator+" bottom 100"]
print " stochastic gradient estimator | top 10 | bottom 10 | top 100 | bottom 100 "
print "------------------------------------|--------|-----------|---------|------------"
print "local reparameterization | {:.2} | {:.2} | {:.2} | {:.2}".format(*get_variances(meangradients,
"indep"))
print "separate weight samples | {:.2} | {:.2} | {:.2} | {:.2}".format(*get_variances(meangradients,
"separate samples"))
print "single weight sample | {:.2} | {:.2} | {:.2} | {:.2}".format(*get_variances(meangradients,
"single sample"))
print "no dropout | {:.2} | {:.2} | {:.2} | {:.2}".format(*get_variances(meangradients,
"no dropout"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Another quirk of the experiment is that they decided to merge the MNIST validation set into the training set, so we have no separate validation set
Step2: Independent Weight Noise
Step3: Would actually like to see what kind of weight-wise variances we've learnt
Step4: In the first layer, we see that it's starting to learn to lower the variance in the center of the images to get more information about the digits
Step5: However, in later layers this isn't true; they just live with an effective dropout rate of 0.5
Step6: Then we store these parameter values and continue training to 100 epochs
Step7: Unfortunately, our variance has increased, rather than decreased with training. Could be due to the increase in noise added to the layers, but this should only affect the bottom layer.
Step8: No dropout noise
Step9: Single Weight Sample
Step10: Separate Weight Samples
Step11: Propagating single examples through means we can't use the above function, because we'll have to store 120,000 large matrices, which will fill up RAM, and because we have to build up batch gradient means from individual samples. So in this case, we'll just write some code to do the same thing again
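The bookkeeping can be illustrated with plain numpy; a toy sketch of forming minibatch-mean gradients from per-example gradients and then taking their variance (illustrative only, with made-up shapes, not the Theano implementation used above):

import numpy as np

rng = np.random.RandomState(0)
per_example_grads = rng.normal(size=(600, 4))   # 600 examples, 4 parameters (toy numbers)
batch_size = 200
n_batches = per_example_grads.shape[0] // batch_size

overall_mean = per_example_grads.mean(axis=0)
batch_means = per_example_grads.reshape(n_batches, batch_size, -1).mean(axis=1)
# Variance of the minibatch gradient estimates around the full-data gradient.
grad_var = np.mean((batch_means - overall_mean) ** 2, axis=0)
print(grad_var)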
Step12: Building Table
|
13,394
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv("datascience.csv", encoding='gb18030') # Note that the file uses the Chinese GB18030 encoding rather than Pandas' default, so we must specify it explicitly to avoid garbled text.
# Then look at the first few rows of the data frame to confirm it was read correctly.
df.head()
# Check the length of the data frame to confirm the data was read in completely.
df.shape
import jieba
def chinese_word_cut(mytext):
return " ".join(jieba.cut(mytext))
df["content_cutted"] = df.content.apply(chinese_word_cut)
#执行完毕之后,我们需要查看一下,文本是否已经被正确分词。
df.content_cutted.head()
# Vectorize the text
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
n_features = 1000
tf_vectorizer = CountVectorizer(strip_accents = 'unicode',
max_features=n_features,
stop_words='english',
max_df = 0.5,
min_df = 10)
tf = tf_vectorizer.fit_transform(df.content_cutted)
# Apply the LDA method
from sklearn.decomposition import LatentDirichletAllocation
n_topics = 5
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=50,
learning_method='online',
learning_offset=50.,
random_state=0)
# This step is computationally heavy and takes a while to run; the Jupyter Notebook may appear unresponsive while it executes. Just wait a bit, no need to worry.
lda.fit(tf)
# A topic has no definite name; it is characterized by a set of keywords. We define the following function to print the top keywords of each topic:
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
# With the function defined, we tentatively print the top 20 keywords for each topic.
n_top_words = 20
# The following call prints the keyword list for each topic in turn:
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
pyLDAvis.sklearn.prepare(lda, tf, tf_vectorizer)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (1024, 3)
Step2: We have to set the number of topics ourselves. This requirement catches many people off guard: how am I supposed to know how many topics this pile of articles contains?!
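One rough but common way to choose the count is to fit models with several topic numbers and compare perplexity; a hedged sketch reusing the tf matrix built above (note that newer scikit-learn versions name the parameter n_components rather than n_topics):

from sklearn.decomposition import LatentDirichletAllocation

# Lower perplexity is (loosely) better; it is only a rough guide.
for k in [3, 5, 8, 12]:
    lda_k = LatentDirichletAllocation(n_components=k, learning_method='online',
                                      random_state=0)
    lda_k.fit(tf)
    print(k, lda_k.perplexity(tf))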
Step3: At this point, LDA has successfully extracted the topics for us. But I know you are not quite satisfied, because the result is not very intuitive.
|
13,395
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import display, HTML;from string import Template;
HTML('<script src="//d3js.org/d3.v3.min.js" charset="utf-8"></script>')
css_text2 = '''
#main { float: left; width: 750px;}#sidebar { float: right; width: 100px;}#sequence { width: 600px; height: 70px;}#legend { padding: 10px 0 0 3px;}#sequence text, #legend text { font-weight: 400; fill: #000000; font-size: 0.75em;}#graph-div2 { position: relative;}#graph-div2 { stroke: #fff;}#explanation { position: absolute; top: 330px; left: 405px; width: 140px; text-align: center; color: #666; z-index: -1;}#percentage { font-size: 2.3em;}
'''
with open('interactive_circle_cl.js', 'r') as myfile:
data=myfile.read()
js_text_template2 = Template(data)
html_template = Template('''
<style> $css_text </style>
<div id="sequence"></div>
<div id="graph-div2"></div>
<div id="explanation" style="visibility: hidden;">
<span id="percentage"></span><br/>
of patients meet this criteria
</div>
</div>
<script> $js_text </script>
''');
js_text2 = js_text_template2.substitute({'graphdiv': 'graph-div2'});
HTML(html_template.substitute({'css_text': css_text2, 'js_text': js_text2}))
import pandas as pd
table = pd.read_csv(open('./python_scripts/11_primarydiagnosis_tables_catv2_consultLoc.csv','r'))
#Anxiety
table[0:5]
#Appetite
table[5:10]
#Constipation
table[10:15]
#Depression
table[15:20]
#Drowsiness
table[20:25]
#Nausea
table[25:30]
#Pain
table[30:35]
#Shortness
table[35:40]
#Tiredness
table[40:45]
#Well Being
table[45:50]
# PPSScore
table[50:51]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graphic Interpretation
|
13,396
|
<ASSISTANT_TASK:>
Python Code:
!pip install -r requirements_notebook.txt
!kubectl create namespace cifar10
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10
!kubectl create -f broker.yaml
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
%%writefile cifar10.yaml
apiVersion: "serving.kubeflow.org/v1alpha2"
kind: "InferenceService"
metadata:
name: "tfserving-cifar10"
namespace: cifar10
spec:
default:
predictor:
tensorflow:
storageUri: "gs://kfserving-samples/tfserving/cifar10/resnet32"
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default
!kubectl apply -f cifar10.yaml
%%writefile cifar10od.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: vae-outlier
namespace: cifar10
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:0.0.2
imagePullPolicy: IfNotPresent
args:
- --model_name
- cifar10od
- --http_port
- '8080'
- --protocol
- tensorflow.http
- --storage_uri
- gs://seldon-models/alibi-detect/od/OutlierVAE/cifar10
- --reply_url
- http://hello-display.cifar10
- --event_type
- org.kubeflow.serving.inference.outlier
- --event_source
- org.kubeflow.serving.cifar10od
- OutlierDetector
!kubectl apply -f cifar10od.yaml
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: vaeoutlier-trigger
namespace: cifar10
spec:
broker: default
filter:
attributes:
type: org.kubeflow.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: vae-outlier
namespace: cifar10
!kubectl apply -f trigger.yaml
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
#CLUSTER_IP="localhost:8080"
SERVICE_HOSTNAMES=!(kubectl get inferenceservice -n cifar10 tfserving-cifar10 -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_CIFAR10=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_CIFAR10)
SERVICE_HOSTNAMES=!(kubectl get ksvc -n cifar10 vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import matplotlib.pyplot as plt
import numpy as np
import json
import tensorflow as tf
tf.keras.backend.clear_session()
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
import requests
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
def predict(X):
formData = {
'instances': X.tolist()
}
headers = {}
headers["Host"] = SERVICE_HOSTNAME_CIFAR10
res = requests.post('http://'+CLUSTER_IP+'/v1/models/tfserving-cifar10:predict', json=formData, headers=headers)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ",res.status_code)
return []
def outlier(X):
formData = {
'instances': X.tolist()
}
headers = {"Alibi-Detect-Return-Feature-Score":"true","Alibi-Detect-Return-Instance-Score":"true", \
"ce-namespace": "default","ce-modelid":"cifar10","ce-type":"io.seldon.serving.inference.request", \
"ce-id":"1234","ce-source":"localhost","ce-specversion":"1.0"}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
res = requests.post('http://'+CLUSTER_IP+'/', json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ",res.status_code)
return []
idx = 1
X = X_train[idx:idx+1]
show(X)
predict(X)
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[-1]))
print("Outlier",j["data"]["is_outlier"]==[1])
np.random.seed(0)
X_mask, mask = apply_mask(X.reshape(1, 32, 32, 3),
mask_size=(10,10),
n_masks=1,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
show(X_mask)
predict(X_mask)
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[-1]))
print("Outlier",j["data"]["is_outlier"]==[1])
od_preds = outlier(X_mask)
plot_feature_outlier_image(od_preds,
X_mask,
X_recon=None)
!kubectl delete ns cifar10
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup Resources
Step2: Create the Kfserving image classification model for Cifar10. We add in a logger for requests.
Step3: Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the event-display we started.
Step4: Create a Knative trigger to forward logging events to our Outlier Detector.
Step5: Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
Step6: If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following cell.
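The port-forward command itself is not shown in the notebook; a typical form (an assumption, adjust to your installation) would be:

!kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80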
Step7: Normal Prediction
Step8: After a few seconds you can check the event display for an outlier detection prediction. This should be false.
Step9: Outlier Prediction
Step10: Now lets check the event display for a new message. This should show we have found an outlier.
Step11: We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
Step12: We now plot those feature scores returned by the outlier detector along with our original image.
Step13: Tear Down
|
13,397
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.svm import SVC
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
import numpy as np
#Load data with only two classes
iris = datasets.load_iris()
X = iris.data[:100,:]
y = iris.target[:100]
# Standarize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
# Create support vector classifier object
svc = SVC(kernel='linear', random_state=0)
# Train classifier
model = svc.fit(X_std, y)
# View support vectors
model.support_vectors_
# View indices of support vectors
model.support_
# View number of support vectors for each class
model.n_support_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Dataset
Step2: Standardize Features
Step3: Train Support Vector Classifier
Step4: View Support Vectors
Step5: View Indices Of Support Vectors
Step6: View Number Of Support Vectors With Each Class
|
13,398
|
<ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/defense_evasion/host/empire_wdigest_downgrade.tar.gz"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, TargetObject
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 13
AND TargetObject LIKE "%UseLogonCredential"
AND Details = 1
'''
)
df.show(10,False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Mordor Dataset
Step2: Analytic I
|
13,399
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from astropy.coordinates import SkyCoord
from astroquery.gaia import Gaia

tables = Gaia.load_tables()
querystart = r"SELECT "
rr_params = ["source_id", "pf", "pf_error", "int_average_g",
"int_average_g_error", "int_average_bp", "int_average_bp_error",
"int_average_rp", "int_average_rp_error", "metallicity", "metallicity_error",
"g_absorption", "best_classification", "peak_to_peak_g"]
gs_params = ["ra", "ra_error", "dec", "dec_error", "parallax", "parallax_error"]
rs_prefix = ', '.join(["rr.{}".format(rr_param) for rr_param in rr_params])
gs_prefix = ', '.join(["gs.{}".format(gs_param) for gs_param in gs_params])
querymid = r" FROM gaiadr2.vari_rrlyrae AS rr JOIN gaiadr2.gaia_source AS gs "
queryobjloc = r"ON gs.source_id=rr.source_id WHERE 1=INTERSECTS (POINT ('ICRS', gs.RA, gs.DEC), "
queryend = "CIRCLE('ICRS',{ra}, {dec}, {radius}))"
completequery = querystart + rs_prefix +', ' + gs_prefix + \
querymid + queryobjloc + queryend
print(completequery)
m62_coords = SkyCoord.from_name('m62')
sdg_coords = SkyCoord.from_name('Sculptor Dwarf Galaxy')
print(m62_coords)
print(sdg_coords)
querysdg = completequery.format(ra=sdg_coords.ra.deg, dec=sdg_coords.dec.deg, radius=0.25)
jobsdg = Gaia.launch_job(query=querysdg)
querym62 = completequery.format(ra=m62_coords.ra.deg, dec=m62_coords.dec.deg, radius=0.25)
jobm62 = Gaia.launch_job(query=querym62)
sdg = jobsdg.get_results() # this returns an astropy Table
sdg = sdg[~sdg['pf'].mask]
sdg = sdg[~sdg['metallicity'].mask]
m62 = jobm62.get_results() # this returns an astropy Table
m62 = m62[~m62['pf'].mask]
m62 = m62[~m62['metallicity'].mask]
m62 = m62[15.<m62['int_average_g']]
m62 = m62[m62['int_average_g'] < 17.]
plt.figure(figsize=(20,5))
plt.subplot(1, 2, 1)
plt.scatter(sdg['ra'], sdg['dec'], s=3)
plt.gca().invert_xaxis()
plt.xlabel("RA")
plt.ylabel("Dec")
plt.title('Sculptor Dwarf Galaxy')
plt.subplot(1, 2, 2)
plt.scatter(m62['ra'], m62['dec'], s=3)
plt.gca().invert_xaxis()
plt.xlabel("RA")
plt.title('M62')
plt.subplots_adjust(wspace=.15)
plt.show()
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.scatter(np.log10(sdg['pf']), sdg['int_average_g'], s=3)
plt.gca().invert_yaxis()
plt.xlabel('Log Period (days)')
plt.ylabel('Apparent magnitude (mag)')
plt.xlim([-0.4, -0.1])
plt.ylim([20.6, 19.6])
plt.title('Sculptor Dwarf Galaxy')
plt.subplot(1, 2, 2)
plt.scatter(np.log10(m62['pf']), m62['int_average_g'], s=3)
plt.gca().invert_yaxis()
plt.xlabel('Log Period (days)')
plt.ylabel('Apparent magnitude (mag)')
plt.xlim([-0.4, -0.1])
plt.ylim([16.3, 15.3])
plt.title('M62')
plt.show()
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.scatter(sdg['metallicity'], sdg['int_average_g'], s=3)
plt.gca().invert_yaxis()
plt.xlabel('Metallicity (dex))')
plt.ylabel('Apparent magnitude (mag)')
plt.xlim([-3.0, 0.3])
plt.ylim([20.6, 19.6])
plt.title('Sculptor Dwarf Galaxy')
plt.subplot(1, 2, 2)
plt.scatter(m62['metallicity'], m62['int_average_g'], s=3)
plt.gca().invert_yaxis()
plt.xlabel('Metallicity (dex)')
plt.ylabel('Apparent magnitude (mag)')
plt.xlim([-3.0, 0.3])
plt.ylim([16.3, 15.3])
plt.title('M62')
plt.show()
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.scatter(sdg['metallicity'], sdg['int_average_g'], s=3)
plt.gca().invert_yaxis()
plt.xlabel('Metallicity (dex))')
plt.ylabel('Apparent magnitude (mag)')
plt.xlim([-3.0, 0.3])
plt.ylim([20.6, 19.6])
sdg_dm = 19.5
xline = np.array([-3., 0.])
yline = xline * 0.32 + 1.11 + sdg_dm
plt.plot(xline, yline)
plt.title('Sculptor Dwarf Galaxy (DM = {dm})'.format(dm=sdg_dm))
plt.subplot(1, 2, 2)
plt.scatter(m62['metallicity'], m62['int_average_g'], s=3)
plt.gca().invert_yaxis()
plt.xlabel('Metallicity (dex)')
plt.ylabel('Apparent magnitude (mag)')
plt.xlim([-3.0, 0.3])
plt.ylim([16.3, 15.3])
m62_dm = 13.7
xline = np.array([-3., 0.])
yline = xline * 0.32 + 1.11 + m62_dm + 1.2
plt.plot(xline, yline)
plt.title('M62 (DM = {dm})'.format(dm=m62_dm))
plt.show()
print(10.**(0.2 * (sdg_dm - m62_dm)))
print(10. * 10.**(0.2 * sdg_dm))
print(10. * 10.**(0.2 * m62_dm))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, let's construst our query. We use the prebuilt rr_lyrae table in Gaia DR2, and get all of the parameters of interest. We will retrieve stars within a small radius around both M62 and Sculptor, so we build those parameters into the query string.
Step2: The complete query will then have the following form.
Step3: astropy's SkyCoord can resolve names to RA and Dec for convenience. This uses the CDS name resolver service.
Step4: Then we can retrieve from the Gaia databases stars within 15 arcminutes of each object.
Step5: Here, we do a few basic cuts, to remove objects in which the period or metallicity have not been measured, and a few (presumed) outliers in M62, perhaps due to RR Lyrae along the line of sight or other errors.
Step6: Just as a sanity check, let us look at the RA and Dec distribution in each case
Step7: Now plot the period-luminosity relation. Already we can see potential problems, since the Sculptor Dwarf Galaxy RR Lyrae are clearly near Gaia's detection limit so may be affected by bias, perhaps as a function of luminosity. Although the RR Lyrae period-luminosity relation is supposed to be weak in the G band, for Sculptor it appears strong. Meanwhile, for M62 there appears to be more scatter than the typical 0.1 mag usually quoted for RR Lyrae.
Step8: Finally, we are just going to eyeball a good fit to each distribution using the relation between $M_G$ and metallicity found in Muraveva et al. (2018), Table 4
Step9: Nevertheless, if we take these distance moduli seriously then the relative distances are
Step10: The individual distances in pc can also be inferred
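Both follow from the standard distance-modulus relation, m - M = 5 log10(d / 10 pc), i.e. d = 10**(1 + 0.2*DM) parsecs; a one-line check of the values printed above:

# distance in parsecs from a distance modulus DM
for dm in (sdg_dm, m62_dm):
    print(dm, 10.**(1. + 0.2 * dm))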
|